<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.2 20120330//EN" "http://jats.nlm.nih.gov/publishing/1.2/JATS-journalpublishing1.dtd">
<!--<?xml-stylesheet type="text/xsl" href="article.xsl"?>-->
<article article-type="research-article" dtd-version="1.2" xml:lang="en" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<front>
<journal-meta>
<journal-id journal-id-type="issn">2397-1835</journal-id>
<journal-title-group>
<journal-title>Glossa: a journal of general linguistics</journal-title>
</journal-title-group>
<issn pub-type="epub">2397-1835</issn>
<publisher>
<publisher-name>Open Library of Humanities</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.16995/glossa.18539</article-id>
<article-categories>
<subj-group>
<subject>Research article</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Rethinking linguistic feedback: A modality-agnostic and holistic approach to multimodal addressee signals in spoken and signed dyadic interaction</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-4630-4590</contrib-id>
<name>
<surname>Bauer</surname>
<given-names>Anastasia</given-names>
</name>
<email>anastasia.bauer@uni-koeln.de</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-0972-1683</contrib-id>
<name>
<surname>Gipper</surname>
<given-names>Sonja</given-names>
</name>
<email>sonja.gipper@uni-koeln.de</email>
<xref ref-type="aff" rid="aff-1">1</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-1756-7420</contrib-id>
<name>
<surname>Herrmann</surname>
<given-names>Tobias-Alexander</given-names>
</name>
<email>t.herrmann@uni-koeln.de</email>
<xref ref-type="aff" rid="aff-2">2</xref>
</contrib>
<contrib contrib-type="author">
<contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-5762-8734</contrib-id>
<name>
<surname>Hosemann</surname>
<given-names>Jana</given-names>
</name>
<email>jhoseman@uni-koeln.de</email>
<xref ref-type="aff" rid="aff-3">3</xref>
</contrib>
</contrib-group>
<aff id="aff-1"><label>1</label>Department of Linguistics, University of Cologne, Germany</aff>
<aff id="aff-2"><label>2</label>Slavic Institute, University of Cologne, Germany</aff>
<aff id="aff-3"><label>3</label>Sign Language Interpreting, University of Cologne, Germany</aff>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-16">
<day>16</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date pub-type="collection">
<year>2026</year>
</pub-date>
<volume>11</volume>
<issue>1</issue>
<fpage>1</fpage>
<lpage>50</lpage>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2026 The Author(s)</copyright-statement>
<copyright-year>2026</copyright-year>
<license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/4.0/">
<license-p>This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See <uri xlink:href="http://creativecommons.org/licenses/by/4.0/">http://creativecommons.org/licenses/by/4.0/</uri>.</license-p>
</license>
</permissions>
<self-uri xlink:href="https://www.glossa-journal.org/articles/10.16995/glossa.18539/"/>
<abstract>
<p>In this paper, we investigate multimodal recipient feedback in casual dyadic conversation in four languages: German Sign Language, Russian Sign Language, spoken German, and spoken Russian. Taking a modality-agnostic and holistic approach, we examine the composition of conversational feedback from different multimodal signals, comparing sign and spoken languages without prioritizing any of the articulators or modalities. We find that in sign and spoken languages alike, feedback events include non-manual signals such as head movements or facial expressions in 85% or more of the instances. Across modalities, all four languages show only small percentages of feedback events without any non-manual elements, and head nods constitute the most frequent feedback signal. Moreover, we model three empirically observed <italic>feedback styles</italic> ranging from a style employing a rich array of non-manual signals, through one comprising mostly head movements, to a style relying somewhat more on talk-oriented forms. Our data demonstrate that the basic infrastructure for feedback is shared among signers and speakers, while at the same time, signers and speakers show different probabilities for using one style or another. On the basis of these patterns, we propose a gradient model of feedback styles that generates testable predictions for future work. Our study emphasizes the importance of investigating interactional phenomena from a holistic, multi- and cross-modal perspective. As vocal and manual signals account for only a relatively small percentage of the feedback signals employed by the signers and speakers in our study, a linguistic theory that focuses solely on vocal and/or manual behavior remains incomplete and fails to account for the largest part of feedback in conversation. This study highlights that non-manual signals are fundamental to feedback and conversation more broadly, and argues that theories of language must be reconceptualized, as purely speech-based accounts fail to capture the full complexity of human interaction. The findings have broader implications for theories of interaction and the Language Faculty: they underscore the need for models that integrate visual, non-manual, and interactional dimensions as constitutive elements of linguistic behavior. By highlighting the centrality of multimodal articulators in feedback production, this work contributes to a more comprehensive theory of human communicative interaction.</p>
</abstract>
</article-meta>
</front>
<body>
<sec>
<title>1 Introduction</title>
<p>When conversing, human interactants take turns (<xref ref-type="bibr" rid="B104">Sacks et al. 1974</xref>), thereby constantly changing roles between signer/speaker and recipient. While the current signer/speaker is the one who has the floor to convey some message to the recipient, the latter is by no means inactive. Rather, recipients in signed and spoken conversations are known to constantly provide the signer/speaker with feedback, for instance in the form of manual signs (e.g., <sc>yes</sc>), vocalizations (e.g., <italic>mhm</italic>), head nods, or smiles (<xref ref-type="bibr" rid="B132">Yngve 1970</xref>; <xref ref-type="bibr" rid="B26">Brunner 1979</xref>; <xref ref-type="bibr" rid="B5">Allwood et al. 1992</xref>; <xref ref-type="bibr" rid="B16">Bavelas et al. 2000</xref>; <xref ref-type="bibr" rid="B46">Gardner 2001</xref>; <xref ref-type="bibr" rid="B30">Cerrato &amp; Skhiri 2003</xref>; <xref ref-type="bibr" rid="B89">Mesch 2016</xref>; <xref ref-type="bibr" rid="B133">Zellers 2021</xref>; <xref ref-type="bibr" rid="B38">Dingemanse et al. 2022</xref>; <xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>). This is illustrated in (1)<xref ref-type="fn" rid="n1">1</xref> taken from the DGS Corpus (<xref ref-type="bibr" rid="B56">Hanke et al. 2020</xref>) where signer B is nodding her head multiple times by way of providing feedback to A:</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(1)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>DGS example illustrating an addressee head nod (DGS Corpus; <xref ref-type="bibr" rid="B56">Hanke et al. 2020</xref>). A short contextualized clip is available at <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.6084/m9.figshare.30738701">https://doi.org/10.6084/m9.figshare.30738701</ext-link>, and the full video can be viewed in the DGS Corpus <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.sign-lang.uni-hamburg.de/meinedgs/html/1427158-11470746-12015917_en.html">here</ext-link> (timestamp: 00:03:13.622)</italic>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g10.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Feedback signals<xref ref-type="fn" rid="n2">2</xref> fulfill a broad range of conversational functions: they may display the addressee&#8217;s active participation, their understanding of the preceding contribution, or their capacity and willingness to go on with the conversation (<xref ref-type="bibr" rid="B5">Allwood et al. 1992</xref>). They may also give an evaluation of the content offered by the other interactant (<xref ref-type="bibr" rid="B123">Uhmann 1996</xref>), show affiliation with the signer/speaker (<xref ref-type="bibr" rid="B116">Stivers 2008</xref>), convey the presence or absence of conversational trouble (<xref ref-type="bibr" rid="B109">Schegloff 1982</xref>), and indicate whether a longer conversational unit is considered ongoing or completed (<xref ref-type="bibr" rid="B76">Koole &amp; Gosen 2024</xref>). By providing feedback, recipients actively participate in shaping the signer&#8217;s/speaker&#8217;s talk (<xref ref-type="bibr" rid="B118">Tolins &amp; Fox Tree 2014</xref>), where inadequate feedback can lead to the deterioration of the teller&#8217;s performance (<xref ref-type="bibr" rid="B16">Bavelas et al. 2000</xref>) or to the production of repair sequences (<xref ref-type="bibr" rid="B28">Byun et al. 2018</xref>). Moreover, the way a person gives feedback influences how others perceive their personality (<xref ref-type="bibr" rid="B22">Blomsma et al. 2022</xref>), and there is even some evidence that the feedback style of a person indeed shows a relationship to their personality traits (<xref ref-type="bibr" rid="B19">Bendel Larcher 2021</xref>).</p>
<p>All this suggests that feedback constitutes a vital mechanism in human communication and cognition. Given its centrality, feedback can provide us with a window into the mechanisms of human social interaction in general. Any theory of language and communication must therefore be able to account for feedback phenomena in conversation.</p>
<p>The forms of feedback signals, their frequencies as well as their typical conversational employment vary across languages, across varieties of the same language, and even across individuals using the same language (<xref ref-type="bibr" rid="B127">White 1989</xref>; <xref ref-type="bibr" rid="B87">Maynard 1990</xref>; <xref ref-type="bibr" rid="B119">Tottie 1991</xref>; <xref ref-type="bibr" rid="B32">Clancy et al. 1996</xref>; <xref ref-type="bibr" rid="B117">Stubbe 1998</xref>; <xref ref-type="bibr" rid="B36">Dideriksen et al. 2023</xref>; <xref ref-type="bibr" rid="B23">Blomsma et al. 2024</xref>). However, striking similarities have also been noted, both among spoken languages as well as between sign and spoken languages. In spoken languages, the employment of a form containing a nasal consonant such as <italic>mhm</italic> is pervasive across languages from different families and with different typological profiles (<xref ref-type="bibr" rid="B38">Dingemanse et al. 2022</xref>). Regarding similarities between sign and spoken languages, feedback signals in both British Sign Language and British English have been shown to rely to a great extent on head movements (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>).</p>
<p>With language existing in at least three combinations of modalities&#8212;spoken/auditory, signed/visual, and signed/tactile&#8212;there is a strong need to study feedback across modalities using similar approaches. Today it is widely acknowledged that human language in its primary co-present context is a fundamentally multimodal phenomenon (<xref ref-type="bibr" rid="B51">Goodwin 1986</xref>; <xref ref-type="bibr" rid="B13">Bavelas 1990</xref>; <xref ref-type="bibr" rid="B71">Kendon 2004</xref>; <xref ref-type="bibr" rid="B126">Vigliocco et al. 2014</xref>; <xref ref-type="bibr" rid="B1">Abner et al. 2015</xref>; <xref ref-type="bibr" rid="B91">Mondada 2016</xref>; <xref ref-type="bibr" rid="B69">Keevallik 2018</xref>; <xref ref-type="bibr" rid="B98">Perniss 2018</xref>; <xref ref-type="bibr" rid="B64">Holler &amp; Levinson 2019</xref>; <xref ref-type="bibr" rid="B97">&#214;zy&#252;rek 2021</xref>; <xref ref-type="bibr" rid="B101">Rasenberg et al. 2022</xref>; <xref ref-type="bibr" rid="B73">Kendrick et al. 2023</xref>; <xref ref-type="bibr" rid="B106">Sandler 2024</xref>) that involves the coordination of various articulators (<xref ref-type="bibr" rid="B64">Holler &amp; Levinson 2019</xref>; <xref ref-type="bibr" rid="B97">&#214;zy&#252;rek 2021</xref>). The same is true for feedback (<xref ref-type="bibr" rid="B29">Cassell &amp; Thorisson 1999</xref>; <xref ref-type="bibr" rid="B2">Allwood &amp; Cerrato 2003</xref>; <xref ref-type="bibr" rid="B3">Allwood et al. 2007a</xref>; <xref ref-type="bibr" rid="B4">b</xref>; <xref ref-type="bibr" rid="B20">Bertrand et al. 2007</xref>; <xref ref-type="bibr" rid="B93">Navarretta &amp; Paggio 2010</xref>; <xref ref-type="bibr" rid="B122">Truong et al. 2011</xref>; <xref ref-type="bibr" rid="B94">Navarretta &amp; Paggio 2012</xref>; <xref ref-type="bibr" rid="B83">Malisz et al. 2016</xref>; <xref ref-type="bibr" rid="B101">Rasenberg et al. 2022</xref>; <xref ref-type="bibr" rid="B25">Boudin et al. 2024</xref>). Nevertheless, research adopting a holistic multimodal perspective on feedback remains scarce, and our understanding of how vocal and visual, manual and non-manual signals combine into complex recipient feedback in everyday conversation is still limited, particularly comparing sign and spoken languages.</p>
<p>With the current study, we address this gap by investigating how feedback varies in form and frequency in signed and spoken languages, expanding upon previous observations in the literature. We examine feedback from a multimodal and cross-linguistic perspective by utilizing corpora of casual conversations from four different languages: German Sign Language (DGS), Russian Sign Language (RSL), spoken German (GER), and spoken Russian (RUS) (<xref ref-type="bibr" rid="B62">Hoffmann &amp; Himmelmann 2009</xref>; <xref ref-type="bibr" rid="B27">Burkova 2015</xref>; <xref ref-type="bibr" rid="B75">Konrad et al. 2020</xref>; <xref ref-type="bibr" rid="B12">Bauer &amp; Poryadin 2023</xref>; <xref ref-type="bibr" rid="B9">Bauer 2023</xref>). Our focus is on possible compositions of what we call <italic>feedback events</italic>, which consist of multiple signals produced with different articulators (see Section 3.2). We take into account various signals which can be produced during feedback, including, e.g. words like <italic>ja</italic> &#8216;yes&#8217;, manual signs such as DGS <sc>stimmt</sc> &#8216;right&#8217; or mouthings like <italic>okay<xref ref-type="fn" rid="n3">3</xref></italic>, vocalizations like <italic>mhm</italic> and non-manual signals such as head nods, eyebrow raises, smiles and others. Using parallel annotation and analysis, we annotated at least 43 minutes of co-present dyadic conversations<xref ref-type="fn" rid="n4">4</xref> in each of the four languages and identified ca. 1,900 instances of feedback. Crucially, our approach is modality-agnostic, meaning that we analyze feedback signals and events without privileging one articulator over another. We are inspired by the work of Hodge et al. (<xref ref-type="bibr" rid="B61">2023</xref>), who conducted a modality-agnostic comparison of quotatives in Auslan (Australian Sign Language) and the spoken language Matukar Panau (Oceanic). While Hodge et al. (<xref ref-type="bibr" rid="B61">2023</xref>) examined different articulators available in sign and spoken languages (e.g., mouthing only in Auslan, speech only in Matukar Panau), the present study extends this approach by introducing a methodological framework that enables an integrated comparison of these types of signals. Specifically, we group two articulators together (hands and mouth) and classify manual signs, mouthings, spoken words, and vocalizations under the unified category of <italic>talk</italic>. This allows us to compare sign and spoken languages without overemphasizing differences imposed by the constraints of their respective modalities&#8212;a limitation also identified by Hodge et al. (<xref ref-type="bibr" rid="B61">2023</xref>)<xref ref-type="fn" rid="n5">5</xref>.</p>
<p>Our data show similarities between signed and spoken language modalities in the architecture of feedback events, as most feedback events (85% or more) involve non-manual signals such as head and/or facial movements in all four languages. These feedback events were produced either non-manually alone or in combination with signed/spoken elements across languages. Moreover, in all four languages, the most frequent feedback event design is that of a multiple head nod without any additional signal, emphasizing the importance of head nods in feedback in the four languages examined. We interpret these findings as contributing to the accumulating evidence supporting the existence of a shared interactional infrastructure of conversation among both signers and speakers (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>).</p>
<p>Despite frequent reference to multimodality in contemporary linguistic discourse, the full complexity of multimodal human communication remains largely underrepresented in many prevailing linguistic and cognitive theories, which often rely on unimodal conceptions of language. However, there are notable exceptions, such as the recent work by Cohn &amp; Schilperoord (<xref ref-type="bibr" rid="B34">2024</xref>). A comprehensive linguistic theory must account for language as a multimodal system, encompassing both the vocal-auditory and gestural-visual articulators, and must situate these within the broader framework of human cognition.</p>
</sec>
<sec>
<title>2 Previous research on multimodal feedback in sign and spoken languages</title>
<p>Conversational feedback has been referred to by various terms in the literature, the most common being <italic>backchannels</italic> (<xref ref-type="bibr" rid="B132">Yngve 1970</xref>), <italic>listener</italic> or <italic>minimal responses</italic> (<xref ref-type="bibr" rid="B60">Hess &amp; Johnston 1988</xref>; <xref ref-type="bibr" rid="B17">Bavelas et al. 2002</xref>; <xref ref-type="bibr" rid="B45">Fujimoto 2009</xref>), and <italic>reactive</italic> or <italic>response tokens</italic> (<xref ref-type="bibr" rid="B46">Gardner 2001</xref>; <xref ref-type="bibr" rid="B88">McCarthy 2003</xref>; <xref ref-type="bibr" rid="B131">Xu 2016</xref>). The term <italic>feedback</italic>, as employed in this study, was introduced by Allwood et al. (<xref ref-type="bibr" rid="B5">1992</xref>). Despite ongoing terminological differences (<xref ref-type="bibr" rid="B113">Simon 2018</xref>), there is a general agreement among researchers that feedback signals must be distinguished based on the pragmatic or communicative functions they serve. For instance, some utterances may signal active participation, others may acknowledge and agree with what has been stated, while others might treat new information as newsworthy or provide an evaluative comment (see <xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
<p>Prior to 2000, research predominantly focused on vocal responses, such as <italic>mm, yeah</italic>, or <italic>okay</italic>, primarily in spoken English (<xref ref-type="bibr" rid="B18">Beach 1993</xref>; <xref ref-type="bibr" rid="B41">Drummond &amp; Hopper 1993</xref>; <xref ref-type="bibr" rid="B68">Jefferson 1993</xref>). However, some researchers recognized that feedback encompassed more than oral behavior, drawing attention to visual signals. Dittmann &amp; Llewellyn (<xref ref-type="bibr" rid="B40">1968</xref>) are among the first to acknowledge the relationship between vocal responses and head nods during feedback, and Yngve (<xref ref-type="bibr" rid="B132">1970</xref>) already emphasizes the importance of investigating video instead of audio data for the study of feedback in conversation. Brunner (<xref ref-type="bibr" rid="B26">1979</xref>) and Jefferson (<xref ref-type="bibr" rid="B67">1984</xref>) highlight smiles and laughter in conversation, while other linguists include various head movements in interactions in the same category as vocal expressions like <italic>uh, yeah</italic>, and co-completions or requests for clarification (<xref ref-type="bibr" rid="B70">Kendon 1967</xref>; <xref ref-type="bibr" rid="B42">Duncan 1974</xref>; <xref ref-type="bibr" rid="B54">Hadar et al. 1985</xref>). These studies initiated a tradition of studying feedback from a multimodal perspective.</p>
<p>Although the potential of (combining) visual signals in feedback-giving is vast, most research has concentrated on individual feedback signals from a single articulator (<xref ref-type="bibr" rid="B2">Allwood &amp; Cerrato 2003</xref>; <xref ref-type="bibr" rid="B20">Bertrand et al. 2007</xref>; <xref ref-type="bibr" rid="B65">H&#246;mke et al. 2017</xref>; <xref ref-type="bibr" rid="B72">Kendrick &amp; Holler 2017</xref>). While some studies recognize the role of various articulators, such as for example head movements and smiles, they often do not integrate these into a holistic analysis of feedback (<xref ref-type="bibr" rid="B16">Bavelas et al. 2000</xref>; <xref ref-type="bibr" rid="B46">Gardner 2001</xref>; <xref ref-type="bibr" rid="B80">Lindblad &amp; Allwood 2015</xref>; <xref ref-type="bibr" rid="B49">Gironzetti et al. 2016</xref>; <xref ref-type="bibr" rid="B83">Malisz et al. 2016</xref>). Few studies have concurrently examined multiple feedback signals. Blomsma et al. (<xref ref-type="bibr" rid="B23">2024</xref>), for example, analyze a variety of facial gestures across multiple addressees, but their study is limited to a single spoken language and does not involve real human&#8211;human interaction.</p>
<p>In comparison to research on spoken languages, studies on feedback mechanisms in sign languages remain relatively scarce. Existing literature has primarily focused on repair mechanisms, documenting them in Argentine Sign Language (<xref ref-type="bibr" rid="B85">Manrique &amp; Enfield 2015</xref>; <xref ref-type="bibr" rid="B84">Manrique 2016</xref>), Swiss German Sign Language (<xref ref-type="bibr" rid="B48">Girard-Groeber 2015</xref>), Norwegian Sign Language (NTS) (<xref ref-type="bibr" rid="B114">Skedsmo 2020</xref>), Balinese homesign (<xref ref-type="bibr" rid="B105">Safar &amp; De Vos 2022</xref>), Providence Island Sign Language (Omardeen <xref ref-type="bibr" rid="B95">2023</xref>), British Sign Language (BSL) (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>), and in cross-signing contexts, where Deaf signers of different sign languages meet for the first time (<xref ref-type="bibr" rid="B28">Byun et al. 2018</xref>).</p>
<p>With respect to non-repair feedback, there is much less research on sign languages. Baker (<xref ref-type="bibr" rid="B8">1977</xref>) offers a brief description of what she terms <italic>regulators</italic> in a small corpus of American Sign Language (ASL), building upon the work by Wiener &amp; Devoe (<xref ref-type="bibr" rid="B129">1974</xref>) who made a systematic description of those behaviors in the visual, vocal, postural, and gestural articulators that signal and/or monitor the initiation, maintenance, and termination of spoken messages. Baker (<xref ref-type="bibr" rid="B8">1977</xref>) differentiates between feedback signals that initiate a turn (such as an increase in size and quantity of head-nodding, movement of hands out of rest position, i.e., indexing, touching, or waving hand in front of interlocutor, gaze) and feedback signals produced in passive recipiency (gaze, head nodding, smiling, postural shifts, facial activity expressing surprise, agreement, uncertainty, lack of understanding, etc.) or short repetitions of some of the interlocutor&#8217;s signs. Subsequent research by Coates &amp; Sutton-Spence (<xref ref-type="bibr" rid="B33">2001</xref>) further classified turn-taking regulation in sign language interactions into two categories: non-manual and manual. This distinction seems important for future research, as non-manual elements rather than manual signs appear to play a more critical role in conveying feedback (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>), a pattern also observed in the present study.</p>
<p>Mesch (<xref ref-type="bibr" rid="B89">2016</xref>) reports for the first time on backchannel signals in Swedish Sign Language (STS), noting that manual backchannels (such as palm-up, <sc>yes, index, agree, exactly</sc>) are quite rare and often produced in the signer&#8217;s lap. STS signers predominantly use non-manual backchannel signals such as nodding, head-shaking, smiling, changes in posture, nose wrinkling, or widened eyes to signal feedback. In her analysis of 35 minutes of STS dialogues involving 16 Deaf signers, Mesch (<xref ref-type="bibr" rid="B89">2016</xref>) focuses primarily on manual backchannels, which generally consist of one to three signs/gestures in STS, with palm-up and <sc>yes</sc> being particularly frequent.</p>
<p>A recent study by B&#246;rstell (<xref ref-type="bibr" rid="B24">2024</xref>) also focuses on manual feedback in STS, specifically on continuers. Using the approach proposed by Dingemanse et al. (<xref ref-type="bibr" rid="B38">2022</xref>), B&#246;rstell (<xref ref-type="bibr" rid="B24">2024</xref>) examines continuer candidates within a subset of the STS corpus. This study supports the findings by Mesch (<xref ref-type="bibr" rid="B89">2016</xref>) that the two manual elements <sc>yes</sc> and palm-up are the most frequent manual backchannels in STS. Similar to Mesch (<xref ref-type="bibr" rid="B89">2016</xref>), B&#246;rstell (<xref ref-type="bibr" rid="B24">2024</xref>) excludes non-manual signals due to the limited annotation of non-manual expressions in the dataset.</p>
<p>Fenlon et al. (<xref ref-type="bibr" rid="B44">2013</xref>) examined gender and age differences in turn length and the frequency of backchannels in BSL dyadic conversations. Contrary to earlier studies on spoken languages (<xref ref-type="bibr" rid="B42">Duncan 1974</xref>; <xref ref-type="bibr" rid="B21">Bilous &amp; Krauss 1988</xref>), they found no significant differences between gender and age groups in the time spent on manual or non-manual backchannels.</p>
<p>Lutzenberger et al. (<xref ref-type="bibr" rid="B82">2024</xref>) provide the first cross-linguistic comparison of feedback in a signed and a spoken language. Their recent study on repairs and continuers in BSL and British English revealed similarities in discourse management strategies among signers and speakers who share a common cultural background. They observe that the interactional infrastructure used by both signers and speakers predominantly relies on behaviors of the head, face, and body&#8212;alone or combined with what they call &#8216;verbal&#8217; elements (spoken words or manual signs)&#8212;while solely &#8216;verbal&#8217; strategies are rare, similar to what was found by Mesch (<xref ref-type="bibr" rid="B89">2016</xref>) earlier.</p>
<p>In DGS (German Sign Language), head nods have been found to play a crucial role in interaction and even exhibit distinct phonetic characteristics when used as feedback. This was demonstrated in a recent study by Bauer et al. (<xref ref-type="bibr" rid="B10">2024</xref>), who used OpenPose to analyze the kinematic properties of head nods, revealing that feedback nods are slower and smaller in amplitude than affirmative nods.</p>
<p>However, research on feedback in signed conversations is still in its early stages. This may be due, in part, to a longstanding manual bias in the study of sign languages (<xref ref-type="bibr" rid="B99">Puupponen 2019</xref>). Sign language linguistics has largely focused on lexical, phonological, and morpho-syntactic structures, often overlooking the interactive dimensions of communication (<xref ref-type="bibr" rid="B78">Lepeut &amp; Shaw 2022</xref>). Yet interaction consists of composite utterances (<xref ref-type="bibr" rid="B71">Kendon 2004</xref>), in which non-manual actions combine with manual and/or vocal actions&#8212;a perspective that has received little attention in sign language research to date.</p>
<p>The majority of existing studies on feedback in sign languages have concentrated on manual backchannels, in part due to the challenges of annotating non-manual signals. Additionally, most research has focused on continuers, as these are more readily identifiable compared to other feedback types. Our study seeks to address this gap by examining the full range of multimodal feedback, encompassing various feedback types produced by different articulators (see Section 3.1 for an explanation of the various types of feedback).</p>
</sec>
<sec>
<title>3 The current study</title>
<p>The literature summarized in Section 2 suggests that there are striking similarities between signed and spoken languages with respect to the composition of feedback, in that head movements are very frequently involved in both types of languages. However, differences are also apparent: where spoken languages employ speech (including both lexical and non-lexical tokens such as <italic>yeah</italic> or <italic>mhm</italic>), feedback in sign languages sometimes contains signs such as <sc>yes</sc> or gestures such as palm-up, which are often signed at a location low in the signing space (<xref ref-type="bibr" rid="B89">Mesch 2016</xref>; <xref ref-type="bibr" rid="B24">B&#246;rstell 2024</xref>). The existence of nasal feedback signals such as <italic>mhm</italic> in spoken languages and low-signed signals in sign languages suggests that speakers and signers alike strive to minimize the effort of production and reduce the potential intrusiveness of feedback (<xref ref-type="bibr" rid="B38">Dingemanse et al. 2022</xref>; <xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>; <xref ref-type="bibr" rid="B24">B&#246;rstell 2024</xref>).</p>
<p>However, the differences summarized above are in fact due to constraints of the particular modality (see also <xref ref-type="bibr" rid="B125">Vandenitte 2023</xref>). Although sign language users may potentially employ nasal vocalizations, these are not visually perceptible. Likewise, it is obvious that spoken languages do not possess lexical manual signs. How, then, can we compare sign and spoken languages in a meaningful way without overemphasizing these modality-based constraints? We suggest that this is possible with a modality-agnostic (<xref ref-type="bibr" rid="B61">Hodge et al. 2023</xref>) approach to feedback (see Section 3.2).</p>
<p>Also, in previous research on conversational feedback, the focus often lies on one single type of feedback signal, e.g., manual signs and gestures (<xref ref-type="bibr" rid="B89">Mesch 2016</xref>), vocalizations (<xref ref-type="bibr" rid="B46">Gardner 2001</xref>; <xref ref-type="bibr" rid="B133">Zellers 2021</xref>), nodding (<xref ref-type="bibr" rid="B116">Stivers 2008</xref>), or smiles (<xref ref-type="bibr" rid="B26">Brunner 1979</xref>). More holistic approaches looking at the multimodal composition of feedback are rarer (<xref ref-type="bibr" rid="B80">Lindblad &amp; Allwood 2015</xref>; <xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>). Our study is grounded in such holistic approaches, as our multimodal and modality-agnostic perspective on feedback assumes that feedback can consist of multiple layers conveyed through different articulators.</p>
<p>In order to develop such an approach, in Section 3.2 we redefine feedback in a way that allows us to investigate it from two perspectives: a holistic perspective, taking the composition of instances of feedback into account, and an atomic perspective looking at different articulators involved in formulating feedback separately. Before we start presenting our modality-agnostic and holistic approach, in Section 3.1 we briefly discuss the delimitation of the phenomenon of feedback as implemented in our study.</p>
<sec>
<title>3.1 Delimiting feedback</title>
<p>In order to make transparent which kinds of signals we include in our study, in the following we propose a schema of conversational feedback that allows for an integration of different types of signals found in different languages, while at the same time making clear distinctions between signals that are oriented toward indicating the presence vs. the absence of trouble.</p>
<p>The first important distinction we draw is between feedback and other kinds of responses in interaction. Feedback events are fundamentally responsive in that they are by definition related to some preceding talk by another interactant. However, it is crucial to distinguish feedback from other types of responses: while feedback events can be and are regularly solicited by the current signer or speaker, it is crucial that they are not made conditionally relevant in the same way as, for instance, answers to questions. This means that, in principle, the recipient can decide about the placement of feedback signals and may even withhold relevant feedback (<xref ref-type="bibr" rid="B109">Schegloff 1982: 86</xref>). Although the lack of feedback can lead to conversational failure (<xref ref-type="bibr" rid="B16">Bavelas et al. 2000</xref>), which is also the case for conditionally relevant responses in that they are, if withheld, &#8220;officially absent&#8221; (<xref ref-type="bibr" rid="B108">Schegloff 1968: 1083</xref>), the consequences are different: Conditionally relevant responses&#8212;answers to questions, acceptance of invitations, etc.&#8212;are responses that are specifically relevant at the particular point in time, with restrictions on possible responses set by the initial utterance. Feedback, in contrast, may be relevant at any point in a conversation, due to the fact that communicative trouble may arise at any time and, consequently, make the initiation of repair&#8212;or the passing of that opportunity&#8212;necessary (<xref ref-type="bibr" rid="B109">Schegloff 1982</xref>). Thus, the conditions on sequential placement are very different for conditionally relevant responses and feedback. This is illustrated by Example (2) showing a question&#8211;answer pair, and Example (3) showing the use of a feedback signal. In (2), interactant A poses a polar question in the first line, which interactant B answers in the second line. The question restricts B&#8217;s possibilities for responding, as an answer of the type yes/no is made conditionally relevant by the question. B responds to the question by means of the response particle <italic>ja</italic> &#8216;yes&#8217;.</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(2)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>Spoken German example illustrating a question&#8211;answer sequence from M&#252;nsterKorpus_DB (<xref ref-type="bibr" rid="B62">Hoffmann &amp; Himmelmann 2009</xref>)</italic>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g11.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>In (3), in contrast, A offers a piece of information to B in the first line. This action does not make any particular response relevant, so B can choose to provide feedback or withhold it. The possibilities for responding are thus not restricted in the same way as by the question in (2). This does not mean that feedback may not be preferred over silence in this context; it just means that it is not conditionally relevant in the same way as an answer to a question. Speaker B can also choose the type of feedback she provides. In this case, she chooses a verbal feedback token in the form of the response particle <italic>ja</italic> &#8216;yes&#8217;. Here we can observe the multifunctionality of response particles: <italic>ja</italic> &#8216;yes&#8217; is employed to formulate an affirmative answer to a polar question in (2), and as a continuer in (3).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(3)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>Spoken German example illustrating a sequence with feedback from M&#252;nsterKorpus_DB (<xref ref-type="bibr" rid="B62">Hoffmann &amp; Himmelmann 2009</xref>)</italic>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g12.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Another important distinction to draw is that between different types of feedback, based on whether it deals with some conversational trouble (repair) or indicates a lack thereof (non-repair feedback) (see <xref ref-type="fig" rid="F1">Figure 1</xref>).</p>
<fig id="F1">
<caption>
<p><bold>Figure 1:</bold> Feedback strategies.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g1.png"/>
</fig>
<p><xref ref-type="fig" rid="F1">Figure 1</xref> illustrates various feedback strategies. Although repair mechanisms (<xref ref-type="bibr" rid="B37">Dingemanse &amp; Enfield 2015</xref>; <xref ref-type="bibr" rid="B39">Dingemanse et al. 2015</xref>) are included in the figure for the sake of completeness, they are excluded from further discussion, as they are not the focus of this study. Instead, we investigate conversational moves that do not initiate or constitute repair, but rather imply that repair is unnecessary. One of the most well-known and widely studied types of feedback is what is often referred to as a continuer. Continuers convey at least the basic interactional function of passing on the opportunity for initiating repair (<xref ref-type="bibr" rid="B109">Schegloff 1982</xref>)&#8212;see Example (3) above.<xref ref-type="fn" rid="n6">6</xref> Moreover, feedback signals called &#8216;newsmarks&#8217;, in addition, explicitly treat the information given by the preceding interactant as new and mark it as &#8216;remarkable&#8217; (<xref ref-type="bibr" rid="B86">Marmorstein &amp; Szczepek Reed 2023</xref>). In this category, we include non-repetitional requests for reconfirmation (<xref ref-type="bibr" rid="B47">Gipper et al. 2023</xref>) such as German <italic>echt?</italic> &#8216;really?&#8217;, see Example (4), as well as change-of-state tokens (<xref ref-type="bibr" rid="B58">Heritage 1984</xref>) such as <italic>ah</italic>, see Example (5).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(4)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>Spoken German example of a newsmark from M&#252;nsterKorpus_DB (<xref ref-type="bibr" rid="B62">Hoffmann &amp; Himmelmann 2009</xref>)</italic>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g13.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(5)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>Spoken Russian example illustrating a newsmark from <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.18716/dch/a.00000016">Russian Multimodal Conversation Corpus</ext-link> (<xref ref-type="bibr" rid="B12">Bauer &amp; Poryadin 2023</xref>). A short clip is available at <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.6084/m9.figshare.30738701">https://doi.org/10.6084/m9.figshare.30738701</ext-link></italic>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g14.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Lastly, there are feedback signals that overtly indicate some kind of evaluation of the preceding information, &#8216;assessments&#8217; (<xref ref-type="bibr" rid="B123">Uhmann 1996</xref>), as in Example (6).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(6)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>RSL example illustrating a feedback event comprising a manual sign and multiple small head nods functioning as an assessment (<ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://rsl.nstu.ru">the RSL Corpus</ext-link>; <xref ref-type="bibr" rid="B27">Burkova 2015</xref>). A short contextualized clip is available at <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.6084/m9.figshare.30738701">https://doi.org/10.6084/m9.figshare.30738701</ext-link>, and the full video can be viewed in <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://rsl.nstu.ru">the RSL Corpus</ext-link> after registration (RSLN-d-s23-s24, timestamp 00:00:05.965)</italic>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g15.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>We include these three types of feedback events&#8212;continuers, newsmarks, and assessments&#8212;in our investigation, regardless of their sequential position (second, i.e., following a volunteered initial utterance, or third, i.e., following a response made conditionally relevant) or their turn-taking properties (passive recipiency vs. incipient speakership, see <xref ref-type="bibr" rid="B107">Sbranna et al. 2022</xref>).</p>
<p>For this study, we did not include repetitions (see, e.g., the first part in Example (6)), as at least for German it has been shown that they tend to fulfill relatively marked actions when used in requests for reconfirmation (<xref ref-type="bibr" rid="B47">Gipper et al. 2023</xref>). Given that it is not clear whether this is also true for the other three languages, we chose to exclude repetitions and leave their investigation for future research, as they may not be fully comparable across the languages in our sample.</p>
</sec>
<sec>
<title>3.2 A modality-agnostic and holistic approach to feedback</title>
<p>In this paper, we take a modality-agnostic (<xref ref-type="bibr" rid="B61">Hodge et al. 2023</xref>) approach to the comparison of sign and spoken languages, an approach that looks at all components of a feedback event without privileging any of the articulators. Most of the signals functioning as feedback in our study, produced by the various articulators&#8212;head, eyebrows, eyes, nose, mouth gestures, cheeks, manual gestures, and shoulders&#8212;are comparable across sign and spoken languages. We thus observe that their use in feedback in signed languages is not qualitatively different from that in spoken languages. While we acknowledge that the meanings in feedback need not be the same for all articulators in the four languages, we start out with a descriptive and exploratory approach allowing for the possibility that signals are used in similar ways across languages. A detailed analysis of the exact meanings in feedback and other interactional functions will be an intriguing topic for future research. Our data show, for example, that both eyebrow raises and nose wrinkles are used in all four languages. For eyebrow raises, we can say that they are used in very similar ways in a newsmarking function, albeit with different frequencies. With regard to nose wrinkling, DGS shows markedly more nose wrinkles than the other languages (8% vs. 0.5% or less) (see <xref ref-type="table" rid="T8">Table 8</xref>). A nose wrinkle is known to convey the meaning &#8216;that&#8217;s right&#8217; in DGS interaction (<xref ref-type="bibr" rid="B59">Herrmann 2020</xref>), but its use in spoken German discourse has not been addressed in the literature. A fuller analysis of this difference is left for future research. Moreover, the plurifunctionality of non-manual gestures is a well-established phenomenon (<xref ref-type="bibr" rid="B7">Andries et al. 2023</xref>; <xref ref-type="bibr" rid="B96">Oomen &amp; Roelofsen 2023</xref>), and we acknowledge that many of the signals examined in this study may serve multiple functions simultaneously.</p>
<p>At the same time, we recognize that some articulators may be argued to differ across sign and spoken languages in the production of feedback signals. In sign languages, feedback may be conveyed through manual signs (see DGS Example (7) or RSL Example (6)) as well as mouthings (see DGS Example (7) and RSL Example (8)), mouth movements resembling spoken or written forms of the surrounding language (<xref ref-type="bibr" rid="B11">Bauer &amp; Kyuseva 2022</xref>). In contrast, spoken languages express feedback through spoken words (see Example (3)) or vocalizations (e.g., <italic>mhm</italic>) (<xref ref-type="bibr" rid="B38">Dingemanse et al. 2022</xref>).</p>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(7)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>DGS example illustrating a feedback event containing a head nod, a manual sign and a mouthing (DGS Corpus; <xref ref-type="bibr" rid="B56">Hanke et al. 2020</xref>). A short clip is available at <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.6084/m9.figshare.30738701">https://doi.org/10.6084/m9.figshare.30738701</ext-link>, and the full video can be viewed in the <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.sign-lang.uni-hamburg.de/meinedgs/html/1427158-11470746-12015917_de.html">DGS Corpus</ext-link>, timestamp 00:10:39.304)</italic>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g16.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<list list-type="gloss">
<list-item>
<list list-type="wordfirst">
<list-item><p>(8)</p></list-item>
</list>
</list-item>
<list-item>
<list list-type="sentence-gloss">
<list-item>
<list list-type="final-sentence">
<list-item><p><italic>RSL example illustrating a feedback event containing a head nod and a mouthing</italic>(from the <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.18716/dch/a.00000028">RSL Conversations Corpus</ext-link>; <xref ref-type="bibr" rid="B12">Bauer &amp; Poryadin 2023</xref>). A short clip is available at <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.6084/m9.figshare.30738701">https://doi.org/10.6084/m9.figshare.30738701</ext-link>.</p></list-item>
<list-item><p><inline-graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g17.png"/></p></list-item>
</list>
</list-item>
</list>
</list-item>
</list>
<p>Classifying these signals on the basis of the articulators involved (hand, mouth, and mouth, respectively) would obscure the fact that these differences are based on modality-specific constraints for the two types of languages. Therefore, rather than classifying these three types of signals on the basis of the articulators with which they are produced, we classify them as a single category, which we call <italic>talk<xref ref-type="fn" rid="n7">7</xref></italic>, for all four languages, signed and spoken. This allows us to compare sign and spoken languages with respect to the extent to which they employ <italic>talk</italic> elements regardless of the modality.</p>
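<p>For illustration, the following minimal sketch (written in Python, with hypothetical signal labels that do not correspond to the tier names of our actual coding scheme, see the Appendix) shows how articulator-specific signal types can be collapsed into the modality-agnostic category <italic>talk</italic>, making feedback events from signed and spoken data directly comparable:</p>
<preformat>
# Minimal sketch with hypothetical labels (not the tier names of the actual coding scheme):
# manual signs, mouthings, spoken words, and vocalizations are all mapped to 'talk',
# while non-manual signals keep their articulator-specific labels.

TALK_TYPES = {"manual_sign", "mouthing", "spoken_word", "vocalization"}

def signal_category(signal_type):
    """Return the modality-agnostic category for an annotated signal type."""
    return "talk" if signal_type in TALK_TYPES else signal_type

# A DGS feedback event (sign + mouthing + nod) and a spoken German one (word + nod)
# receive the same category profile under this mapping.
dgs_event = ["manual_sign", "mouthing", "head_nod"]
ger_event = ["spoken_word", "head_nod"]
print(sorted({signal_category(s) for s in dgs_event}))  # ['head_nod', 'talk']
print(sorted({signal_category(s) for s in ger_event}))  # ['head_nod', 'talk']
</preformat>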
<p>In addition to this modality-agnostic approach, in the following we propose a novel <italic>holistic</italic> construal of conversational feedback, where different articulators are employed to produce signals that combine into what we call a <italic>feedback event</italic>&#8212;see <xref ref-type="fig" rid="F2">Figure 2</xref>. The figure shows a stretch of conversation between two interactants. The dark green longer bars represent turns, while the shorter bars indicate feedback events. Different colors represent different types of feedback, such as continuers or assessments, with varying durations. The zoom-in window illustrates how various signals may be combined within a single feedback event&#8212;some lasting the full duration of the event, others occurring only briefly.</p>
<fig id="F2">
<caption>
<p><bold>Figure 2:</bold> Feedback signals vs. feedback events.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g2.png"/>
</fig>
<p>As can be seen in the ELAN screenshot in <xref ref-type="fig" rid="F3">Figure 3</xref>, the person produces distinct signals such as a manual sign <sc>ja</sc> &#8216;<sc>yes</sc>&#8217; in her lap, a mouthing <italic>ah</italic>, a head nod (hnn), squinted eyes (esc) and a nose wrinkle (nw) with different articulators, such as the head, the eyes, the mouth and the nose. These collectively form a <italic>feedback event</italic>. While some feedback events may consist of a single signal, this study shows that they often comprise multiple simultaneous signals (see <xref ref-type="fig" rid="F7">Figure 7</xref>). So, instead of analyzing one signal, such as head nods, we claim that it is essential to take all (potential) articulators into account to get a broader understanding of the composition of feedback events. Crucially, this perspective does not entail that all signals that constitute a feedback event necessarily convey one single meaning. Rather, our approach focuses on temporal aspects of co-occurrence.</p>
<fig id="F3">
<caption>
<p><bold>Figure 3:</bold> A screenshot from ELAN showing an example of a multilayered feedback event in DGS (source: DGS corpus, Hanke et al. 2020, file koe_03_sachgebiete).</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g3.png"/>
</fig>
<p>This redefinition allows us to investigate feedback from two perspectives, looking at the whole, potentially multi-layered feedback event, but also at its components. Moreover, it allows for a modality-agnostic approach to feedback where the employment of different articulators is compared across sign and spoken languages.</p>
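<p>The construal of a feedback event as a bundle of temporally co-occurring signals can also be made explicit procedurally. The following minimal sketch (in Python, assuming signals exported from ELAN tiers as simple records with start and end times in milliseconds; the timestamps are invented for illustration) groups overlapping signals into a single feedback event, as in the example shown in <xref ref-type="fig" rid="F3">Figure 3</xref>:</p>
<preformat>
# Minimal sketch: group temporally overlapping signals into feedback events.
# Assumes signals exported from ELAN tiers as (articulator, value, start, end) records;
# the timestamps below are invented for illustration.

from collections import namedtuple

Signal = namedtuple("Signal", "articulator value start end")

def group_into_events(signals):
    """Merge signals into events whenever they overlap in time."""
    events = []
    for sig in sorted(signals, key=lambda s: s.start):
        if not events or sig.start > events[-1]["end"]:
            # no overlap with the current event: open a new one
            events.append({"start": sig.start, "end": sig.end, "signals": []})
        events[-1]["signals"].append(sig)
        events[-1]["end"] = max(events[-1]["end"], sig.end)
    return events

# The multilayered event from Figure 3, approximated with invented timestamps:
example = [
    Signal("hands", "JA", 1200, 1700),
    Signal("mouth", "ah", 1150, 1650),
    Signal("head", "hnn", 1000, 1800),
    Signal("eyes", "esc", 1050, 1600),
    Signal("nose", "nw", 1100, 1500),
]
events = group_into_events(example)
print(len(events), len(events[0]["signals"]))  # 1 5: all five signals form one feedback event
</preformat>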
</sec>
<sec>
<title>3.3 Research questions</title>
<p>In this study, we aim to contribute to our knowledge of the similarities and differences in the composition of feedback events between sign and spoken languages. For this purpose, we compare conversational feedback in four languages, two signed and two spoken, matched according to cultural background: German Sign Language (DGS), Russian Sign Language (RSL), spoken German (GER), and spoken Russian (RUS). We annotated feedback events in corpora of casual conversations according to our definition in Sections 3.1 and 3.2, developing a coding scheme based on previous research and our own findings during annotation (see the Appendix).</p>
<p>In order to investigate possible similarities and differences between the four languages, we employ a descriptive and exploratory approach. Research comparing sign and spoken languages is still too scarce to formulate meaningful hypotheses. Moreover, formulating hypotheses would impose unhelpful restrictions on our research, whereas an exploratory approach opens up the possibility of unanticipated findings.</p>
<p>We start our research with the following questions:</p>
<list list-type="simple">
<list-item><p><bold>I.</bold> What are the typical components of feedback events across languages?</p>
<p>Previous research suggests that signed and spoken languages both rely to a large extent on head movements when expressing feedback events (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>). Moreover, we know that non-manual signals such as head nods can combine with other signals in order to formulate multi-layered feedback events (<xref ref-type="bibr" rid="B40">Dittmann &amp; Llewellyn 1968</xref>). Building on this research, we aim to further investigate the components involved in the formulation of feedback events, with special attention paid to similarities and differences between sign and spoken languages.</p></list-item>
<list-item><p><bold>II.</bold> What is the role of language, language modality, cultural background, and individual signer/speaker in the formulation of feedback events?</p>
<p>As initial research suggests that the formulation of feedback events may in fact be quite similar in sign and spoken languages (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>), we use our matched datasets to look at the question of whether feedback producers of one language are more similar to each other than to those of another language, or whether similarities and differences can rather be explained in terms of cultural background or language modality.</p></list-item>
</list>
</sec>
</sec>
<sec>
<title>4 Materials and methods</title>
<sec>
<title>4.1 Data</title>
<p>For all four languages, we investigated video recordings of free, mundane dyadic conversations, drawing on corpora of DGS, RSL, GER, and RUS. For each language, we included three such conversations. In almost all dyads, both participants were familiar with each other prior to the recording.<xref ref-type="fn" rid="n8">8</xref> In the Russian dataset, however, two participants met for the first time on the day of the recording. In the same dataset, one interactant features in two recordings, so we investigate data from five interactants for Russian and from six interactants for the other three languages. For each language, we annotated between 43 and 58 minutes of conversation. In line with our exploratory approach, we chose to annotate similar amounts of time in order to be able to compare languages with respect to the frequency of feedback.</p>
<p>Our DGS data have been taken from the Public DGS Corpus (<xref ref-type="bibr" rid="B56">Hanke et al. 2020</xref>). The DGS Corpus is an annotated reference corpus of German Sign Language, 50 hours of which have been made publicly available. Its 330 participants use DGS as their primary language of daily life and come from various regions of Germany (<xref ref-type="bibr" rid="B111">Schulder &amp; Hanke 2022</xref>). The DGS content analyzed and presented in this paper is drawn from release 3 of <italic>MY DGS &#8211; annotated</italic> (<xref ref-type="bibr" rid="B56">Hanke et al. 2020</xref>; <xref ref-type="bibr" rid="B75">Konrad et al. 2020</xref>), a research dataset that provides Public DGS Corpus recordings with full sign annotations and translations in German and English.</p>
<p>Our RSL data come from two corpora. One file was sourced from the RSL Online Corpus, developed by Svetlana Burkova and her team at Novosibirsk University (<xref ref-type="bibr" rid="B27">Burkova 2015</xref>). This corpus currently comprises over 230 recordings from 43 RSL signers (both men and women, aged between 18 and 63) including Deaf and Hard-of-Hearing individuals. To obtain authentic conversational data, we selected a 20-minute unprompted conversation between two Deaf signers recorded in Novosibirsk.</p>
<p>Due to the limited amount of interactional data in this corpus, we also used a second corpus of RSL conversations. The additional two casual dyadic conversations, lasting between 40 and 60 minutes, were sourced from Bauer &amp; Poryadin (<xref ref-type="bibr" rid="B12">2023</xref>) and feature Deaf native RSL signers. This recently compiled corpus includes data from signers who previously lived in St. Petersburg, &#268;ita, and &#268;erni&#353;ov before immigrating to Germany. Participants discussed a range of topics, including their lives in Russia before immigrating and their experiences as Deaf individuals in Europe and Russia.</p>
<p>Our German data constitute a subset of the M&#252;nster Korpus (<xref ref-type="bibr" rid="B62">Hoffmann &amp; Himmelmann 2009</xref>), an unpublished corpus of video-recorded conversations among students from different areas in Germany who speak colloquial Standard German as their first language. The conversations were recorded in 2009 in the city of M&#252;nster, Germany.</p>
<p>Our three spoken Russian conversations are part of the Russian Multimodal Conversation Corpus (<xref ref-type="bibr" rid="B12">Bauer &amp; Poryadin 2023</xref>). This corpus features dialogues among Russian immigrants lasting 40&#8211;60 minutes each. Participants, aged 20&#8211;30, are native Russian speakers who had been residing in Germany for no longer than five years at the time of recording.<xref ref-type="fn" rid="n9">9</xref></p>
<p>We aimed to use data that were maximally comparable across both interactional type (free, unprompted conversations between acquainted interlocutors) and recording setup (two participants). <xref ref-type="table" rid="T1">Table 1</xref> summarizes the sources, interactants, annotated minutes, and feedback event counts for the data employed in this study. <xref ref-type="table" rid="T2">Table 2</xref> lists details for the different recordings.</p>
<table-wrap id="T1">
<caption>
<p><bold>Table 1:</bold> Summary of the data employed in this study, including language, sources, durations of the annotated data, number and gender of interactants, and counts of feedback events.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>L<sc>ang</sc></bold>.</td>
<td align="left" valign="top"><bold>S<sc>ource</sc></bold></td>
<td align="left" valign="top"><bold>I<sc>nteractants</sc></bold></td>
<td align="left" valign="top"><bold>M<sc>in</sc></bold>.</td>
<td align="left" valign="top"><bold>E<sc>vents</sc></bold></td>
</tr>
<tr>
<td align="left" valign="top">DGS</td>
<td align="left" valign="top">Hanke et al. (<xref ref-type="bibr" rid="B56">2020</xref>); Konrad et al. (<xref ref-type="bibr" rid="B75">2020</xref>)</td>
<td align="left" valign="top">3 f, 3 m</td>
<td align="left" valign="top">48</td>
<td align="left" valign="top">585</td>
</tr>
<tr>
<td align="left" valign="top">RSL</td>
<td align="left" valign="top">Burkova (<xref ref-type="bibr" rid="B27">2015</xref>); Bauer &amp; Poryadin (<xref ref-type="bibr" rid="B12">2023</xref>)</td>
<td align="left" valign="top">2 f, 4 m</td>
<td align="left" valign="top">43</td>
<td align="left" valign="top">397</td>
</tr>
<tr>
<td align="left" valign="top">GER</td>
<td align="left" valign="top">Hoffmann &amp; Himmelmann (<xref ref-type="bibr" rid="B62">2009</xref>)</td>
<td align="left" valign="top">3 f, 3 m</td>
<td align="left" valign="top">45</td>
<td align="left" valign="top">525</td>
</tr>
<tr>
<td align="left" valign="top">RUS</td>
<td align="left" valign="top">Bauer &amp; Poryadin (<xref ref-type="bibr" rid="B12">2023</xref>)</td>
<td align="left" valign="top">4 f, 1 m</td>
<td align="left" valign="top">58</td>
<td align="left" valign="top">419</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T2">
<caption>
<p><bold>Table 2:</bold> Overview of annotated transcripts.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>L<sc>ang</sc></bold>.</td>
<td align="left" valign="top"><bold>T<sc>ranscript</sc></bold></td>
<td align="left" valign="top"><bold>A<sc>ge</sc></bold></td>
<td align="left" valign="top"><bold>G<sc>ender</sc></bold></td>
<td align="left" valign="top"><bold>M<sc>in</sc></bold>.</td>
<td align="left" valign="top"><bold>S<sc>ource</sc></bold></td>
</tr>
<tr>
<td align="left" valign="top">DGS</td>
<td align="left" valign="top">koe_01_free_conversation</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">14:51</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.sign-lang.uni-hamburg.de/meinedgs/landing/corpus-3.0-text-1427158-11470746-12015917_de.html">https:doi.org/m59x</ext-link></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">koe_03_sachgebiete</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">15:25</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.sign-lang.uni-hamburg.de/meinedgs/landing/corpus-3.0-text-1427725_de.html">https:doi.org/m59z</ext-link></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">koe_04_free_conversation</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">18:10</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://www.sign-lang.uni-hamburg.de/meinedgs/landing/corpus-3.0-text-1427810_de.html">https:doi.org/kz87</ext-link></td>
</tr>
<tr>
<td align="left" valign="top">RSL</td>
<td align="left" valign="top">RSLC_s3_s4_180423</td>
<td align="left" valign="top">31&#8211;45</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">09:59</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://dch.phil-fak.uni-koeln.de/bestaende/datensicherung/russian-sign-language-conversations">https:doi.org/npzp</ext-link></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">RSLN_d2_s8_s9</td>
<td align="left" valign="top">31&#8211;45</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">19:52</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://rsl.nstu.ru/">http://rsl.nstu.ru</ext-link></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">RSLC_s1_s2_180423</td>
<td align="left" valign="top">60+</td>
<td align="left" valign="top">mm</td>
<td align="left" valign="top">14:43</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://dch.phil-fak.uni-koeln.de/bestaende/datensicherung/russian-sign-language-conversations">https:doi.org/npzp</ext-link></td>
</tr>
<tr>
<td align="left" valign="top">GER</td>
<td align="left" valign="top">M&#252;nsterKorpus_UV</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">16:30</td>
<td align="left" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">M&#252;nsterKorpus_DB</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">16:22</td>
<td align="left" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">M&#252;nsterKorpus_LD</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">13:07</td>
<td align="left" valign="top"></td>
</tr>
<tr>
<td align="left" valign="top">RUS</td>
<td align="left" valign="top">RCC_s1_s11_010923</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">fm</td>
<td align="left" valign="top">27:00</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://dch.phil-fak.uni-koeln.de/bestaende/datensicherung/russian-multimodal-conversational-data">https:doi.org/npzr</ext-link></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">RCC_s12_s10_010923</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">ff</td>
<td align="left" valign="top">14:49</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://dch.phil-fak.uni-koeln.de/bestaende/datensicherung/russian-multimodal-conversational-data">https:doi.org/npzr</ext-link></td>
</tr>
<tr>
<td align="left" valign="top"></td>
<td align="left" valign="top">RCC_s1_s2_010923</td>
<td align="left" valign="top">18&#8211;30</td>
<td align="left" valign="top">ff</td>
<td align="left" valign="top">16:00</td>
<td align="left" valign="top"><ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://dch.phil-fak.uni-koeln.de/bestaende/datensicherung/russian-multimodal-conversational-data">https:doi.org/npzr</ext-link></td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>4.2 Annotations</title>
<p>All data were annotated in ELAN (The Language Archive, MPI Nijmegen, The Netherlands; e.g. <xref ref-type="bibr" rid="B35">Crasborn &amp; Sloetjes 2008</xref>). We started out with an annotation scheme developed by the first author, inspired by the RSL Corpus annotations (<xref ref-type="bibr" rid="B27">Burkova 2015</xref>). For each feedback event, we created an annotation on a separate tier in ELAN marking the length of the whole event. A feedback event may consist of a single signal or of multiple signals (see Section 3.2). The length of a feedback event is defined by the start of the first signal involved and the end of the last signal. To separate one feedback event from the next, we applied two criteria: either two subsequent feedback events were separated by 300 ms<xref ref-type="fn" rid="n10">10</xref> of no movements or talk related to giving feedback, or two feedback events occurred consecutively and were distinguished by a noticeable change in movement, shape, or direction (for instance, a head nod transitioning into a head shake). We then added annotations on separate tiers for each articulator involved, as described in <xref ref-type="table" rid="T3">Table 3</xref>. A key distinction from the earlier modality-agnostic approach by Hodge et al. (<xref ref-type="bibr" rid="B61">2023</xref>) lies in our grouping of signs, words, vocalizations (e.g., <italic>mhm</italic>), and mouthings under a single category: <italic>talk</italic> (see Section 3.2). In addition, we introduce an annotation tier labeled <italic>feedback type</italic>, which specifies the design of each feedback event based on the articulators involved: whether the feedback consists of non-manual signals only, <italic>talk</italic> only, manual gestures only, or a combination thereof. <xref ref-type="fig" rid="F3">Figure 3</xref> above shows a still from ELAN exemplifying our annotations, and <xref ref-type="table" rid="T3">Table 3</xref> gives an overview of the annotations for the various articulators.</p>
<table-wrap id="T3">
<caption>
<p><bold>Table 3:</bold> Overview of annotated articulators.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>Articulators</bold></td>
<td align="left" valign="top"><bold>Description</bold></td>
</tr>
<tr>
<td align="left" valign="top">head</td>
<td align="left" valign="top">various head movements (e.g., nods, shakes, tilts)</td>
</tr>
<tr>
<td align="left" valign="top">eyebrows</td>
<td align="left" valign="top">eyebrow movements (raises, frowns)</td>
</tr>
<tr>
<td align="left" valign="top">eyes</td>
<td align="left" valign="top">eye behaviors (e.g., eyes squinted or widened)</td>
</tr>
<tr>
<td align="left" valign="top">nose</td>
<td align="left" valign="top">nose-related actions (esp. wrinkling)</td>
</tr>
<tr>
<td align="left" valign="top">cheeks</td>
<td align="left" valign="top">cheek movements (e.g., puffing)</td>
</tr>
<tr>
<td align="left" valign="top">mouth gesture</td>
<td align="left" valign="top">mouth actions besides mouthing (e.g., pursing lips, smiles)</td>
</tr>
<tr>
<td align="left" valign="top">shoulders</td>
<td align="left" valign="top">shoulder movements (e.g., shrugs)</td>
</tr>
<tr>
<td align="left" valign="top">manual gesture</td>
<td align="left" valign="top">hand/arm gestures (e.g., palm-up)</td>
</tr>
<tr>
<td align="left" valign="top"><italic>talk</italic> category</td>
<td align="left" valign="top">mouthings, signs, spoken words, and vocalizations</td>
</tr>
</tbody>
</table>
</table-wrap>
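<p>To make the event segmentation criterion described above concrete, the following minimal R sketch illustrates one possible way of merging exported signal annotations (for one interactant) into feedback events using the 300 ms criterion. It is an illustration under assumptions, not our actual annotation pipeline: the input file and column names are hypothetical, and the second criterion (a noticeable change in movement, shape, or direction) is not captured here, as it requires manual judgement.</p>
<preformat>
## hypothetical export of signal annotations for one interactant,
## one row per feedback signal, with onset and offset times in milliseconds
signals &lt;- read.csv("feedback_signals.csv")   # columns: onset_ms, offset_ms

merge_into_events &lt;- function(df, gap_ms = 300) {
  df &lt;- df[order(df$onset_ms), ]
  ## a new event starts when the pause since the end of all previous signals
  ## exceeds gap_ms; otherwise the signal joins the current event
  running_end &lt;- cummax(df$offset_ms)
  new_event &lt;- c(TRUE, df$onset_ms[-1] - running_end[-nrow(df)] > gap_ms)
  df$event_id &lt;- cumsum(new_event)
  events &lt;- do.call(rbind, lapply(split(df, df$event_id), function(g) {
    data.frame(event_id  = g$event_id[1],
               start_ms  = min(g$onset_ms),   # start of the first signal
               end_ms    = max(g$offset_ms),  # end of the last signal
               n_signals = nrow(g))
  }))
  events[order(events$event_id), ]
}

events &lt;- merge_into_events(signals)
</preformat>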
<p>During the annotation process, we continuously refined the annotation scheme, incorporating additional variables and annotations. For instance, we reannotated smiles and laughter according to the Smiling Intensity Scale (<xref ref-type="bibr" rid="B49">Gironzetti et al. 2016</xref>). Our annotation scheme is thus grounded both in the previous literature and in our data. As the coding scheme developed, we performed several iterations of coding over all data, followed by several rounds of corrections. In this way, most feedback events have been annotated by at least two annotators. Our coding scheme, in which all tier values are explained in detail, can be found in the Appendix. In total, we identified around 1,900 feedback events in our data, comprising roughly 3,500 feedback signals.</p>
<p>We faced challenges in annotating certain multimodal features; in particular, (mutual) eye gaze was excluded from the analysis. Annotating eye gaze in video data using ELAN proved difficult, leading to inconsistent results and hindering the integration of gaze into the analysis. To address this gap, we plan to employ eye-tracking technology in future research in order to improve the accuracy of gaze annotations. Apart from the challenges with gaze, we did not encounter noticeable difficulties in identifying non-manual signals in our datasets, even though some dyads in these corpora (e.g., in the Russian and RSL data) were filmed from a more lateral camera angle than those in the DGS Corpus. Importantly, this did not result in a higher number of unclear annotation values for these dyads compared to the other languages.</p>
<p>In order to assess the consistency of the annotations across coders, we calculated inter-annotator agreement on the articulator most frequently involved in feedback&#8212;the head. To this end, we re-annotated a randomly selected subset of the data comprising roughly 50% of all annotated head movements (843 out of 1,603) drawn from all languages. These items had not been previously annotated by the respective coder. The onsets, offsets, and durations of the annotations were pre-annotated by the authors and Deaf and hearing assistants and therefore held constant; only the head movement type and the feedback event type were annotated. The resulting two annotation sets were then compared for inter-rater agreement. We calculated Fleiss&#8217; generalized kappa (unweighted, 95% confidence level) in R version 4.5.1 (<xref ref-type="bibr" rid="B100">R Core Team 2025</xref>) with the function fleiss.kappa.raw() from the package irrCAC (<xref ref-type="bibr" rid="B53">Gwet 2019</xref>). The resulting values show agreement well above chance: with a kappa coefficient of 0.72, the two annotators reach substantial agreement (<xref ref-type="bibr" rid="B77">Landis &amp; Koch 1977: 165</xref>). While these inter-coder reliability calculations concern only the content of the annotations, not their temporal location or duration, the fact that all annotation procedures involved subsequent corrections provides additional assurance that the annotations reflect a reasonable degree of coder consensus. A further limitation is that the agreement score pertains exclusively to the head tier and does not include less frequent signals&#8212;such as eyebrow or mouth movements&#8212;which are known to be more challenging for annotators to classify consistently (<xref ref-type="bibr" rid="B43">Esselink et al. 2024</xref>).<xref ref-type="fn" rid="n11">11</xref></p>
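<p>For transparency, the following minimal sketch shows how the agreement calculation reported above can be run in R. The input format is an assumption (a hypothetical file with one row per double-coded head-movement item and one column per annotator); the function, package, and settings are those named above.</p>
<preformat>
library(irrCAC)

## hypothetical table of double-coded head-movement annotations
ratings &lt;- read.csv("head_annotations_double_coded.csv")  # columns: coder1, coder2

## Fleiss' generalized kappa, unweighted, 95% confidence level
fleiss.kappa.raw(ratings[, c("coder1", "coder2")],
                 weights = "unweighted", conflev = 0.95)
</preformat>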
<p>For the <italic>talk</italic> category, we classified the transcribed elements into subcategories based on their function. These categories are summarized in <xref ref-type="table" rid="T4">Table 4</xref>.</p>
<table-wrap id="T4">
<caption>
<p><bold>Table 4:</bold> Summary of subcategories of the <italic>talk</italic> category.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>C<sc>ategory</sc></bold></td>
<td align="left" valign="top"><bold>I<sc>ncluded elements</sc></bold></td>
</tr>
<tr>
<td align="left" valign="top"><italic>yes</italic>-like elements</td>
<td align="left" valign="top">Equivalents of &#8216;yes&#8217; in sign, mouthing and speech; nasal response tokens like <italic>mhm</italic>, typically used in continuer and acknowledgement functions</td>
</tr>
<tr>
<td align="left" valign="top"><italic>ah</italic>-like elements</td>
<td align="left" valign="top">Mouthing and speech <italic>ah</italic>, other change-of-state elements like German <italic>ach so</italic> &#8216;ah ok&#8217;, typically used in newsmark function</td>
</tr>
<tr>
<td align="left" valign="top">Assessing elements</td>
<td align="left" valign="top">Evaluative adjectives like German <italic>interessant</italic> &#8216;interesting&#8217;</td>
</tr>
<tr>
<td align="left" valign="top">Other</td>
<td align="left" valign="top">Elements that did not fit the other categories</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>4.3 Analysis</title>
<p>As our approach is completely exploratory, our analyses do not involve any hypothesis testing. All analyses were performed in R (<xref ref-type="bibr" rid="B100">R Core Team 2025</xref>). To investigate the variability between feedback producers of the same language and compare it with cross-linguistic differences, we created a heatmap dendrogram that visualizes the relative association between feedback producers and articulators. Our inspiration for employing this method for comparing signers and speakers stems from Hodge et al. (<xref ref-type="bibr" rid="B61">2023</xref>). The heatmap dendrogram in <xref ref-type="fig" rid="F6">Figure 6</xref> and the heatmaps in <xref ref-type="fig" rid="F8">Figures 8</xref> and <xref ref-type="fig" rid="F9">9</xref> were created with the function pheatmap() from the package pheatmap (<xref ref-type="bibr" rid="B74">Kolde 2019</xref>). In the heatmap dendrogram, hierarchical clustering is employed in order to identify clusters among the signers and speakers in our sample, as well as among the annotated articulators. We did not scale the data in the heatmaps, as we aim to show, for each interactant, the overall percentage of feedback events that contain a given articulator. The ideal number of clusters for the data used in the heatmap dendrogram was calculated with the function NbClust()<xref ref-type="fn" rid="n12">12</xref> from the package NbClust (<xref ref-type="bibr" rid="B31">Charrad et al. 2014</xref>). The other plots were created with ggplot2 (<xref ref-type="bibr" rid="B128">Wickham 2016</xref>) and, in part, the GGally package (<xref ref-type="bibr" rid="B110">Schloerke et al. 2024</xref>).</p>
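<p>The following minimal sketch illustrates these clustering and plotting steps in R. It assumes a hypothetical matrix with one row per interactant and one column per articulator, holding the proportion of feedback events that contain that articulator (unscaled, as described above); the file name, distance measure, and clustering method shown here are illustrative rather than a record of the exact settings we used.</p>
<preformat>
library(pheatmap)
library(NbClust)

## hypothetical interactant-by-articulator matrix of proportions (0 to 1)
prop &lt;- as.matrix(read.csv("articulator_proportions.csv", row.names = 1))

## estimate the ideal number of interactant clusters
nb &lt;- NbClust(prop, distance = "euclidean", min.nc = 2, max.nc = 8,
              method = "complete", index = "all")

## heatmap dendrogram with the unscaled proportions displayed in the cells
pheatmap(prop, scale = "none", display_numbers = TRUE,
         cutree_rows = length(unique(nb$Best.partition)))
</preformat>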
</sec>
</sec>
<sec>
<title>5 Results</title>
<sec>
<title>5.1 Head nods constitute the most frequent feedback signal in both sign and spoken languages</title>
<p>Across modalities, all four languages show only small percentages of feedback events without any non-manual elements. This is visualized in <xref ref-type="fig" rid="F4">Figure 4</xref>, which shows for each language the percentages of different combinations of types of feedback signals. The large majority of feedback events in all four languages consists of or contains at least one non-manual signal. As some of the numbers are too small to be visible in the figure, we also provide the absolute and relative frequencies of the different feedback event configurations in <xref ref-type="table" rid="T5">Table 5</xref>.</p>
<fig id="F4">
<caption>
<p><bold>Figure 4:</bold> Feedback events across languages: Most feedback events consist of or comprise a non-manual element.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g4.png"/>
</fig>
<table-wrap id="T5">
<caption>
<p><bold>Table 5:</bold> Frequencies of feedback event compositions across languages.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>F<sc>eedback composition</sc></bold></td>
<td align="left" valign="top"><bold>DGS</bold></td>
<td align="left" valign="top"><bold>RSL</bold></td>
<td align="left" valign="top"><bold>GER</bold></td>
<td align="left" valign="top"><bold>RUS</bold></td>
</tr>
<tr>
<td align="left" valign="top">Talk only</td>
<td align="left" valign="top">5 (0.85%)</td>
<td align="left" valign="top">1 (0.3%)</td>
<td align="left" valign="top">81 (15%)</td>
<td align="left" valign="top">16 (4%)</td>
</tr>
<tr>
<td align="left" valign="top">Manual gesture only</td>
<td align="left" valign="top">1 (0.15%)</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top">0</td>
</tr>
<tr>
<td align="left" valign="top">Talk plus non-manual</td>
<td align="left" valign="top">118 (20%)</td>
<td align="left" valign="top">68 (17%)</td>
<td align="left" valign="top">258 (49%)</td>
<td align="left" valign="top">158 (38%)</td>
</tr>
<tr>
<td align="left" valign="top">Manual gesture plus non-manual</td>
<td align="left" valign="top">35 (6%)</td>
<td align="left" valign="top">16 (4%)</td>
<td align="left" valign="top">1 (0.2%)</td>
<td align="left" valign="top">1 (0.2%)</td>
</tr>
<tr>
<td align="left" valign="top">Non-manual only</td>
<td align="left" valign="top">426 (73%)</td>
<td align="left" valign="top">312 (79%)</td>
<td align="left" valign="top">185 (35%)</td>
<td align="left" valign="top">244 (58%)</td>
</tr>
</tbody>
</table>
</table-wrap>
<p><xref ref-type="fig" rid="F4">Figure 4</xref> also shows a higher frequency of <italic>talk</italic> for the spoken languages in comparison to the sign languages, particularly in German. However, most feedback events that contain <italic>talk</italic> also contain a non-manual element. We can also observe that feedback events consisting of <italic>talk</italic> alone are virtually absent in the two sign languages.<xref ref-type="fn" rid="n13">13</xref> In the two spoken languages, they exist but are relatively rare, German showing the highest proportion with roughly 15%.</p>
<p>In our data, we furthermore observe that the head is the most pervasively used articulator in all four languages. <xref ref-type="fig" rid="F5">Figure 5</xref> visualizes the relative frequencies of articulators employed in feedback events. Each panel shows one language, with lines representing individual signers/speakers; the x-axis lists the different articulators, and the y-axis gives the proportion of feedback events containing each articulator.</p>
<fig id="F5">
<caption>
<p><bold>Figure 5:</bold> Coordinate plot of articulators involved in feedback: Head is the most frequent articulator in all languages. Each line represents an individual interactant.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g5.png"/>
</fig>
<p><xref ref-type="fig" rid="F5">Figure 5</xref> shows that in all languages, the head is the most employed articulator. For the spoken languages, the second most employed articulator is <italic>talk</italic>, followed by mouth gestures. For the sign languages, in contrast, mouth gestures are more pervasively used than <italic>talk</italic>. Moreover, in the two sign languages we can observe a high variability in the employment of <italic>talk</italic>, eyebrow, and nose signals. The use of manual gestures and eyes is somewhat more pervasive in sign than in spoken languages, but generally low. Cheeks and shoulders are only very seldomly mobilized to formulate feedback events across the four languages. In sum, the relative rankings of articulators in <xref ref-type="fig" rid="F5">Figure 5</xref> suggest that we are dealing with quantitative rather than fully qualitative differences between sign and spoken languages.</p>
<p>Regarding the actual shape of the head movements, these are also relatively similar across languages. In <xref ref-type="table" rid="T6">Table 6</xref>, the frequencies of multiple and single nods as well as other head movements are summarized. It can be observed that across languages, multiple nods constitute the most frequent type of head movement. Taken together, multiple and single nods account for the largest part of head movements during feedback across languages.</p>
<table-wrap id="T6">
<caption>
<p><bold>Table 6:</bold> Frequencies of different head movements across languages.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>L<sc>ang</sc></bold>.</td>
<td align="left" valign="top"><bold>M<sc>ultiple nods</sc></bold></td>
<td align="left" valign="top"><bold>S<sc>ingle nod</sc></bold></td>
<td align="left" valign="top"><bold>O<sc>ther head movements</sc></bold></td>
<td align="left" valign="top"><bold>T<sc>otal</sc></bold></td>
</tr>
<tr>
<td align="left" valign="top">DGS</td>
<td align="left" valign="top">313 (61%)</td>
<td align="left" valign="top">53 (10%)</td>
<td align="left" valign="top">147 (29%)</td>
<td align="left" valign="top">513</td>
</tr>
<tr>
<td align="left" valign="top">RSL</td>
<td align="left" valign="top">251 (65%)</td>
<td align="left" valign="top">79 (21%)</td>
<td align="left" valign="top">55 (14%)</td>
<td align="left" valign="top">385</td>
</tr>
<tr>
<td align="left" valign="top">GER</td>
<td align="left" valign="top">183 (52%)</td>
<td align="left" valign="top">94 (27%)</td>
<td align="left" valign="top">73 (21%)</td>
<td align="left" valign="top">350</td>
</tr>
<tr>
<td align="left" valign="top">RUS</td>
<td align="left" valign="top">249 (70%)</td>
<td align="left" valign="top">61 (17%)</td>
<td align="left" valign="top">45 (13%)</td>
<td align="left" valign="top">355</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In order to investigate the similarity of feedback event configurations across languages, <xref ref-type="table" rid="T7">Table 7</xref> shows some of the most frequent signal combinations from our data and their frequencies in the four languages. Multiple head nods without any other additional signal constitute the most frequently employed feedback event configuration in all four languages. In the spoken languages, this is followed by multiple head nods combined with a <italic>yes</italic>-like element (e.g., equivalents of <italic>yes</italic> or <italic>mhm</italic>). This combination does not play such a large role in the two sign languages, which, in contrast, have as their second most frequent configuration a single nod without any further signal. A further combination that is employed in all languages with some frequency is multiple head nods combined with a closed mouth smile. Regarding the use of a <italic>yes</italic>-like talk element without further signals, only speakers of spoken German reach a proportion above 10%. In sum, a head nod is the most pervasive head movement during feedback across languages (<xref ref-type="table" rid="T6">Table 6</xref>). <italic>Talk</italic>, e.g. in the form of a <italic>yes</italic>-like talk element, plays a more important role in the spoken than in the sign languages.</p>
<table-wrap id="T7">
<caption>
<p><bold>Table 7:</bold> Most frequent signal combinations across languages.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>F<sc>eedback event design</sc></bold></td>
<td align="left" valign="top"><bold>DGS</bold></td>
<td align="left" valign="top"><bold>RSL</bold></td>
<td align="left" valign="top"><bold>GER</bold></td>
<td align="left" valign="top"><bold>RUS</bold></td>
<td align="left" valign="top"><bold>T<sc>otal</sc></bold></td>
</tr>
<tr>
<td align="left" valign="top">Multiple head nods</td>
<td align="left" valign="top">111 (19%)</td>
<td align="left" valign="top">145 (37%)</td>
<td align="left" valign="top">77 (15%)</td>
<td align="left" valign="top">118 (28%)</td>
<td align="left" valign="top">451 (23%)</td>
</tr>
<tr>
<td align="left" valign="top">Multiple head nods combined with <italic>yes</italic>-like talk element</td>
<td align="left" valign="top">16 (3%)</td>
<td align="left" valign="top">16 (4%)</td>
<td align="left" valign="top">75 (14%)</td>
<td align="left" valign="top">74 (18%)</td>
<td align="left" valign="top">181 (9%)</td>
</tr>
<tr>
<td align="left" valign="top">Single head nod</td>
<td align="left" valign="top">24 (4%)</td>
<td align="left" valign="top">55 (14%)</td>
<td align="left" valign="top">29 (6%)</td>
<td align="left" valign="top">28 (7%)</td>
<td align="left" valign="top">136 (7%)</td>
</tr>
<tr>
<td align="left" valign="top"><italic>yes</italic>-like talk element</td>
<td align="left" valign="top">2 (&lt;1%)</td>
<td align="left" valign="top">0</td>
<td align="left" valign="top">67 (13%)</td>
<td align="left" valign="top">15 (4%)</td>
<td align="left" valign="top">84 (4%)</td>
</tr>
<tr>
<td align="left" valign="top">Multiple head nods with closed mouth smile</td>
<td align="left" valign="top">14 (2%)</td>
<td align="left" valign="top">23 (6%)</td>
<td align="left" valign="top">12 (2%)</td>
<td align="left" valign="top">22 (5%)</td>
<td align="left" valign="top">71 (4%)</td>
</tr>
<tr>
<td align="left" valign="top">Single nod combined with <italic>yes</italic>-like talk element</td>
<td align="left" valign="top">3 (&lt;1%)</td>
<td align="left" valign="top">1 (&lt;1%)</td>
<td align="left" valign="top">43 (8%)</td>
<td align="left" valign="top">17 (4%)</td>
<td align="left" valign="top">64 (3%)</td>
</tr>
<tr>
<td align="left" valign="top">Total feedback events</td>
<td align="left" valign="top">585</td>
<td align="left" valign="top">397</td>
<td align="left" valign="top">525</td>
<td align="left" valign="top">419</td>
<td align="left" valign="top">1926</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>While a more detailed investigation of the exact shapes of feedback events is left for future research, <xref ref-type="table" rid="T8">Table 8</xref> offers a glimpse of the frequencies of some of the most frequent non-manual signals (excluding head movements). The table lists absolute frequencies and the percentage of feedback events that contain the signal (alone or combined with other signals) per language.</p>
<table-wrap id="T8">
<caption>
<p><bold>Table 8:</bold> Frequencies of some of the most frequent non-manual signals, excluding head movements.</p>
</caption>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>S<sc>ignal</sc></bold></td>
<td align="left" valign="top"><bold>DGS</bold></td>
<td align="left" valign="top"><bold>RSL</bold></td>
<td align="left" valign="top"><bold>GER</bold></td>
<td align="left" valign="top"><bold>RUS</bold></td>
</tr>
<tr>
<td align="left" valign="top">Eyebrow raise</td>
<td align="left" valign="top">107 (18%)</td>
<td align="left" valign="top">44 (11%)</td>
<td align="left" valign="top">20 (4%)</td>
<td align="left" valign="top">15 (4%)</td>
</tr>
<tr>
<td align="left" valign="top">Closed mouth smile</td>
<td align="left" valign="top">72 (12%)</td>
<td align="left" valign="top">38 (10%)</td>
<td align="left" valign="top">43 (8%)</td>
<td align="left" valign="top">70 (17%)</td>
</tr>
<tr>
<td align="left" valign="top">Laugh</td>
<td align="left" valign="top">9 (1.5%)</td>
<td align="left" valign="top">9 (2%)</td>
<td align="left" valign="top">35 (7%)</td>
<td align="left" valign="top">20 (5%)</td>
</tr>
<tr>
<td align="left" valign="top">Nose wrinkle</td>
<td align="left" valign="top">48 (8%)</td>
<td align="left" valign="top">1 (&lt;0.5%)</td>
<td align="left" valign="top">3 (0.5%)</td>
<td align="left" valign="top">2 (0.5%)</td>
</tr>
<tr>
<td align="left" valign="top">T<sc>otal</sc></td>
<td align="left" valign="top">585</td>
<td align="left" valign="top">397</td>
<td align="left" valign="top">525</td>
<td align="left" valign="top">419</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>5.2 Signers and speakers employ a range of feedback styles</title>
<p>To explore the variability between signers/speakers of the same language that already becomes apparent in <xref ref-type="fig" rid="F5">Figure 5</xref>, we created a heatmap dendrogram (<xref ref-type="fig" rid="F6">Figure 6</xref>). The numbers in the cells indicate the proportion of feedback events that contain a signal from the respective articulator for the particular interactant, whether produced on its own or in combination with other signals. Dark colors stand for a high percentage of use of a certain articulator by a given interactant, while light colors indicate a low percentage. For instance, the speaker GER3, represented by the first row in the graph, employs head movements in 62% of all feedback events she produces; 51% of her feedback events contain a mouth gesture, and 72% contain <italic>talk</italic>.</p>
<p>Moreover, the heatmap dendrogram reveals how similar the different articulators and the interlocutors are to each other. This allows us to investigate whether feedback producers of the same language, the same modality, or the same cultural background will cluster together. Based on the calculation of the ideal number of clusters (see Section 4.3), the interactants in our sample form three basic clusters. These three clusters are separated from each other in the graph for better visibility.</p>
<fig id="F6">
<caption>
<p><bold>Figure 6:</bold> Heatmap dendrogram: Interactants form three clusters representing three different feedback styles.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g6.png"/>
</fig>
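<p>The proportions shown in the cells of <xref ref-type="fig" rid="F6">Figure 6</xref> can be derived from the annotations along the following lines. This is a minimal sketch assuming a hypothetical long-format table with one row per feedback signal (file and column names are illustrative); it yields a matrix of the kind used as input to the clustering sketch in Section 4.3.</p>
<preformat>
## hypothetical long-format export: one row per feedback signal
fb &lt;- read.csv("feedback_signals_long.csv")  # columns: interactant, event_id, articulator

## count each articulator at most once per feedback event
sig &lt;- unique(fb[, c("interactant", "event_id", "articulator")])
counts &lt;- table(sig$interactant, sig$articulator)

## number of feedback events per interactant
n_events &lt;- tapply(fb$event_id, fb$interactant, function(x) length(unique(x)))

## proportion of an interactant's feedback events containing each articulator
prop &lt;- sweep(counts, 1, n_events[rownames(counts)], "/")
round(prop, 2)
</preformat>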
<p>The heatmap dendrogram in <xref ref-type="fig" rid="F6">Figure 6</xref> echoes our findings from <xref ref-type="fig" rid="F5">Figure 5</xref> above: the head is the most pervasively employed articulator, followed by mouth gesture and <italic>talk</italic>. These articulators also play a role in distinguishing the three largest clusters of interactants in the data: one group of interactants (containing all speakers of German and two speakers of Russian) mostly employs these three articulators. These speakers show a higher proportion of feedback events containing <italic>talk</italic>, and many of them show somewhat lower values for head. The second group, containing five signers of RSL, two signers of DGS, and three speakers of Russian, is defined by particularly high percentages of head movements and comparatively low percentages for all other articulators. The third group, featuring four signers of DGS, one signer of RSL, and one speaker of Russian, is characterized by the employment of a more variable and broader range of non-manual articulators, showing higher values for mouth gesture, eyebrows, and eyes. We propose that the clusters in <xref ref-type="fig" rid="F6">Figure 6</xref> manifest three different feedback styles: a style that relies more on <italic>talk</italic> and less on head movements; a style that relies mostly on head movements; and a style that employs a broader range of non-manual articulators (for discussion see Section 6.2).</p>
<p><xref ref-type="fig" rid="F6">Figure 6</xref> shows a clustering according to language, which is however only partial and fuzzy. The speakers of German cluster together due to their higher frequency of the <italic>talk</italic> articulator, whereas most signers of DGS cluster together due to their higher rates of mouth gesture, eyebrows and eyes. The middle group, however, defined by its reliance on very high percentages of head movements, is composed of most RSL signers, but also contains speakers of Russian and signers of DGS. This shows that, while on average speakers of the spoken languages tend to rely on <italic>talk</italic> more than the sign languages, and some signers (mostly those of DGS) tend to rely on a broader variety of non-manual articulators, these two possibilities constitute two extremes on a scale of reliance on <italic>talk</italic> and visual multi-articulator expression. In between, we find signers and speakers who rely on head movements to a large extent. Moreover, in <xref ref-type="fig" rid="F6">Figure 6</xref> we can observe that cultural background does not seem to play a major role in the clustering, as there is a complete separation between DGS signers and German speakers. In all three clusters, we find interactants from both cultural backgrounds.</p>
<p>These observations are also reflected in the number of articulators employed in feedback events. In <xref ref-type="fig" rid="F7">Figure 7</xref>,<xref ref-type="fn" rid="n14">14</xref> we can observe that signers of DGS show higher percentages of feedback events with three or four articulators involved (and consequently fewer with one articulator only). This fits very well with the observation that most DGS signers form part of the group employing the multi-articulator style in <xref ref-type="fig" rid="F6">Figure 6</xref>. German speakers, in contrast, show a relatively high percentage of feedback events with two articulators, which fits their high rates of <italic>talk</italic> in <xref ref-type="fig" rid="F6">Figure 6</xref>, where head nevertheless remains their most important articulator. Moreover, we examined all feedback events in our data sample to determine how many consisted of a single signal versus multiple signals. Overall, multiple-signal events (n = 1,061) occur more frequently than single-signal events (n = 865), which supports our holistic approach to feedback.<xref ref-type="fn" rid="n15">15</xref></p>
<fig id="F7">
<caption>
<p><bold>Figure 7:</bold> Proportions of feedback events composed of one to six signals across the four languages. Single- vs. multiple-signal event counts are as follows: DGS (202 / 383), GER (230 / 295), RSL (227 / 170), RUS (206 / 213). In total, 865 events contain a single signal and 1,061 contain multiple signals.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g7.png"/>
</fig>
<p>Furthermore, our data give a first hint at intra-speaker variability. The Russian speaker (RUS1) who features in two of our recordings&#8212;one with a previously unknown person (where she is labeled RUS1-0) and one with a person she knows (RUS1-1)&#8212;is assigned to two different groups, which means that she employs different feedback styles in the two conversations. In the conversation with the person she knows, she is in the group employing a broader range of non-manual signals, mainly due to a higher percentage of mouth gestures. In the conversation with the stranger, she is in the group mostly relying on head movements, showing a much larger percentage of head movements (0.93 vs. 0.59). This intra-individual difference cannot be explained by interpersonal alignment (<xref ref-type="bibr" rid="B102">Rasenberg et al. 2020</xref>), as the two Russian speakers with whom she interacts (RUS2 and RUS4) are both in the group relying more on <italic>talk</italic>. This finding suggests that it will be promising to focus on intra-individual variation in the future, paying attention to feedback styles in different situations.</p>
<p>Lastly, different types of feedback signals are clearly associated with different feedback functions, which very likely contributes in turn to the emergence of the feedback styles we observe.<xref ref-type="fn" rid="n16">16</xref> If an interlocutor follows the other interactant&#8217;s narration and provides mostly continuers as feedback, their feedback style will differ from a situation where the same interactant participates in a highly involved way, responding with more assessments, which offer the interactant&#8217;s subjective evaluation, or more newsmarks, which index the remarkability of information (<xref ref-type="bibr" rid="B86">Marmorstein &amp; Szczepek Reed 2023</xref>). In the latter case, we can expect higher proportions of facial expressions such as smiles or laughter (assessments) or eyebrow raises (newsmarks). Feedback styles, then, are partly shaped by the feedback functions that feature prominently in the particular interaction. Future research will therefore examine feedback functions and their correlations with both the feedback signals and the feedback styles proposed here. In addition, we aim to control for the content of each conversation in order to more fully disentangle the influence of feedback function from other factors influencing the distributional patterns observed, such as community-wide conventions and personal preferences.</p>
</sec>
</sec>
<sec>
<title>6 Discussion and theoretical implications</title>
<sec>
<title>6.1 The multimodal and multi-channel nature of feedback</title>
<p>In this paper, we compare the formulation of feedback events in four languages&#8212;two signed and two spoken&#8212;while controlling for cultural background. Our data replicate earlier findings, showing that feedback exhibits some variability across languages and individuals, and that it can also vary for a single individual according to the situation.</p>
<p>In addition, our data suggest that language modality does play a role in the relative ranking of the different articulators in the composition of feedback events. While <italic>talk</italic> (i.e., manual signs and mouthing) is available to and employed by signers to formulate feedback events, it is used less pervasively than <italic>talk</italic> (i.e., spoken words and vocalizations) in spoken languages. However, the head emerges as the most pervasively used articulator for formulating feedback events across all languages in our sample, with multiple head nods without any accompanying signals constituting the most frequent feedback configuration. Our findings are thus consistent with previous findings on conversational feedback and the proposal of a shared conversational infrastructure for feedback and social interaction in general (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>).</p>
<p>This infrastructure is inherently multimodal and multi-channel, involving multiple articulators in both sign and spoken languages.</p>
<p>These findings warrant an explanation. What are the advantages of a multimodal and multi-channel system for communication? We would like to suggest that an inherently multimodal and multi-channelled infrastructure for feedback (and conversation in general) allows signers and speakers to solve at least three conversational problems. First, the use of head movements and/or other visual signals allows for providing feedback without intruding upon the interlocutor&#8217;s turn, in line with earlier proposals (<xref ref-type="bibr" rid="B38">Dingemanse et al. 2022</xref>; <xref ref-type="bibr" rid="B24">B&#246;rstell 2024</xref>). Small multiple head nods can be produced in overlap with an ongoing turn without interrupting the current signer or speaker (whereas a manual gesture might be interpreted as an attempt to take the floor) (see Bauer et al. (<xref ref-type="bibr" rid="B10">2024</xref>) for kinematic properties of feedback head nods). Second, the possibility of producing visual signals in overlap with the interlocutor&#8217;s turn also comes with the opportunity of early signaling which action a signer or speaker is intending to perform in the upcoming turn, which may help the interlocutor to identify that action more quickly (<xref ref-type="bibr" rid="B63">Holler 2025</xref>). Third, a multimodal infrastructure for feedback allows interactants to evade linearity and thus flexibly express different meanings, including potentially the expression of different interactional functions at the same time (e.g., in the form of a head nod functioning as a continuer combined with a signed or spoken lexical assessment). A single-channel infrastructure, in contrast, implies a linear delivery, allowing for meanings to be expressed only consecutively rather than in parallel. The multimodal and multi-channel infrastructure for conversation thus provides signers and speakers with a system that is much more flexible than a single-channel infrastructure could be.</p>
<p>In addition to offering solutions to these three conversational problems, we suggest that the multimodal infrastructure also raises the question of who benefits from feedback. What is usually emphasized is the function of feedback in offering interpretable information to the current signer/speaker. Under this account, the recipient of the feedback is the sole or main beneficiary. But the multimodal infrastructure calls this position into question, as head nods may not only indicate to the addressee that the producer of the nod is still attentive to their talk; rather, we suggest that producing the nod may also afford processing benefits for the person who produces it. Furthermore, while it is clear that facial gestures in conversation are not mere emotional expressions but rather serve pragmatic and interactional functions (<xref ref-type="bibr" rid="B14">Bavelas &amp; Chovil 2018</xref>), this by no means implies that they may not in addition constitute expressions of emotion that allow the speaker to regulate their emotional state. This is particularly relevant in feedback, as feedback is produced <italic>in reaction</italic> to the interlocutor&#8217;s turn. Here, emotional reactions may coincide with the pragmatic function to be conveyed; for example, a state of surprise leading to a reaction in the form of raised eyebrows may coincide with the pragmatic function of indicating that the information provided by the interlocutor is perceived as surprising. For manual gestures, it is well established that they are also produced when they cannot be seen by the interlocutor (<xref ref-type="bibr" rid="B15">Bavelas et al. 1992</xref>; <xref ref-type="bibr" rid="B66">Iverson &amp; Goldin-Meadow 1998</xref>; <xref ref-type="bibr" rid="B90">Mol et al. 2011</xref>). This suggests that their production may not only serve the addressee, but could also impact the cognitive processes of the producer (<xref ref-type="bibr" rid="B50">Goldin-Meadow &amp; Beilock 2010</xref>). However, the potential role of non-manual gestures such as head nods in altering the producer&#8217;s cognitive processes has yet to be explored (<xref ref-type="bibr" rid="B92">Mori et al. 2022</xref>). Our data strongly suggest that this will be a worthwhile path.</p>
</sec>
<sec>
<title>6.2 Modelling multimodal feedback styles</title>
<p>Based on our analysis using the heatmap dendrogram (<xref ref-type="fig" rid="F6">Figure 6</xref>), we identified three distinct feedback styles in our data. These styles differ in the relative prominence of the various articulators used during feedback events. These findings are compatible with a shared interactional infrastructure for feedback among sign and spoken languages (<xref ref-type="bibr" rid="B82">Lutzenberger et al. 2024</xref>), within which, however, interactants can choose among different feedback styles. These styles, we argue, can be ordered on a scale based on the pervasiveness of the different signals, as shown in the heatmap in <xref ref-type="fig" rid="F8">Figure 8</xref>. This heatmap summarizes the three feedback styles identified: each tile shows, per channel, the mean of the proportions for all interactants classified as belonging to that style in <xref ref-type="fig" rid="F6">Figure 6</xref>. Dark tiles indicate a high mean proportion, light-colored tiles a low mean proportion. We chose to build this figure on mean proportions in order to provide some potentially generalizable observations on the three feedback styles.</p>
<fig id="F8">
<caption>
<p><bold>Figure 8:</bold> Heatmap of feedback styles identified with mean articulator proportions.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g8.png"/>
</fig>
<p>The mean proportions in <xref ref-type="fig" rid="F8">Figure 8</xref> suggest that feedback styles may form a gradient pivoting around the head articulator. In the Head-dominant style in the middle, the head is the most prominent articulator. In the Talk-oriented style, talk becomes more frequent, whereas the proportion of head movements drops. In the Face-oriented style, finally, other non-manual articulators are used more often, while head also drops somewhat. This suggests that when articulators other than the head become more pervasive in the feedback of an interactant, head movements become less prominent. We propose that from this observation we can derive a model that provides testable predictions for future research on feedback styles.</p>
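<p>Under the same assumptions as the sketches above, the style-level means underlying <xref ref-type="fig" rid="F8">Figure 8</xref> can be obtained by averaging the interactant-by-articulator proportions within each cluster. The mapping of cluster numbers to style labels shown below is illustrative only and has to be checked against the actual dendrogram.</p>
<preformat>
library(pheatmap)

## `prop` and `nb` are the hypothetical objects from the earlier sketches;
## the rows of `prop` and the entries of nb$Best.partition are assumed to
## refer to the same interactants in the same order
style &lt;- factor(nb$Best.partition,
                labels = c("Talk-oriented", "Head-dominant", "Face-oriented"))

## mean articulator proportion per feedback style (styles as rows)
style_means &lt;- apply(prop, 2, function(p) tapply(p, style, mean))

pheatmap(style_means, scale = "none", cluster_rows = FALSE,
         cluster_cols = FALSE, display_numbers = TRUE)
</preformat>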
<p>In <xref ref-type="fig" rid="F9">Figure 9</xref>, we visualize the proposed model and summarize its predictions regarding the distribution of articulators in feedback styles. The three styles in the middle represent the styles we discovered in this paper. The means are extrapolated from <xref ref-type="fig" rid="F8">Figure 8</xref>. As the proportion of feedback events containing head movements must decrease when other articulators (such as talk or non-manual gestures) become more prominent, the model predicts the theoretical existence of two additional feedback styles: one that is Talk-dominant, and another that is Face-dominant. These two styles represent the hypothetical endpoints of the model&#8217;s continuum.</p>
<fig id="F9">
<caption>
<p><bold>Figure 9:</bold> Heatmap visualizing the model with observed and predicted feedback styles.</p>
</caption>
<graphic xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="glossa-11-18539-g9.png"/>
</fig>
<p>Importantly, unlike the three empirically observed styles in our data, these endpoint styles are theoretical projections&#8212;they are not based on actual measured data but are included to illustrate the model&#8217;s full conceptual range. To our knowledge, such extreme styles have not yet been explicitly described in the literature, although some preliminary observations (e.g., on Yurakar&#233; and Y&#233;l&#238; Dnye<xref ref-type="fn" rid="n17">17</xref>) hint at their potential existence.</p>
<p>The model also predicts that no viable feedback style should exhibit simultaneously high proportions of all three articulator types (i.e., talk, head, and other non-manuals). We speculate that such an all-dominant configuration would likely be inefficient or cognitively taxing for both the producer and the addressee to process, as activating and integrating signals across all available channels might overload the interactional system rather than enhance it. This suggests that efficient feedback systems may rely on channel selection and modulation, rather than maximal channel activation.</p>
<p>For testing these model predictions and the adequacy of the model for new data, it will be crucial to conduct further studies with pairs of sign and spoken languages matched for cultural background. In addition, it will be relevant to investigate languages in non-(Indo-)European contexts in order to arrive at a more diverse picture.</p>
<p>The conceptualization of the feedback space in terms of a multi-dimensional gradient as visualized in <xref ref-type="fig" rid="F9">Figure 9</xref> has another advantage. In addition to an analysis of inter-individual variation as in <xref ref-type="fig" rid="F6">Figure 6</xref>, it allows for an investigation and theorization of intra-individual variation. It is quite probable that signers and speakers are capable of adjusting their feedback style according to the type and topic of the conversation, their interlocutor, the degree of familiarity and thus the extent of shared common ground, and even their personal physical and mental state at the time of the conversation. As another testable hypothesis, we propose that signers or speakers will vary along these lines, and that their styles will correspond to those we found in this paper, summarized in <xref ref-type="fig" rid="F9">Figure 9</xref>. Of course, it is quite possible that other styles will be discovered once more languages&#8212;including non-(Indo-)European ones&#8212;are investigated and larger samples with more interactants form the basis of our investigations, hopefully made possible through automated annotation.</p>
</sec>
<sec>
<title>6.3 Towards a multimodal and interactional theory of the Language Faculty</title>
<p>Our results furthermore have important implications for the conceptualization of conversational concepts, as well as for theories of the Language Faculty. Specifically, they demonstrate the dominant role of non-manual signals&#8212;such as head movements, facial gestures, and other visual signals&#8212;in the production of feedback. This challenges the prevailing emphasis on verbal feedback in accounts of interactional phenomena, which have largely neglected the visual dimension of communication.</p>
<p>Recent research, making use of advances in recording and experimental methodologies, has significantly deepened our understanding of co-present interaction, revealing the fundamentally multimodal nature of human communication (<xref ref-type="bibr" rid="B52">Gregori et al. 2023</xref>; <xref ref-type="bibr" rid="B57">Henlein et al. 2024</xref>). While linguists increasingly recognize the multimodal foundation of language and interaction (<xref ref-type="bibr" rid="B98">Perniss 2018</xref>; <xref ref-type="bibr" rid="B64">Holler &amp; Levinson 2019</xref>; <xref ref-type="bibr" rid="B97">&#214;zy&#252;rek 2021</xref>; <xref ref-type="bibr" rid="B101">Rasenberg et al. 2022</xref>; <xref ref-type="bibr" rid="B55">Hamilton &amp; Holler 2023</xref>; <xref ref-type="bibr" rid="B73">Kendrick et al. 2023</xref>; <xref ref-type="bibr" rid="B106">Sandler 2024</xref>), fundamental theoretical concepts, such as conversational turns and interactional feedback mechanisms, are often theorized in unimodal terms, leaving aside their inherently multimodal character.</p>
<p>We propose a shift in this perspective: these concepts should be theorized as multimodal from the outset, as also recently suggested by Holler (<xref ref-type="bibr" rid="B63">2025</xref>) for &#8216;social action&#8217;. For instance, if a hand gesture precedes speech, why should the conversational turn be defined as beginning with the onset of the first syllable?</p>
<p>We argue that the boundaries of conversational turns&#8212;both their beginnings and endings&#8212;cannot be defined solely by spoken or signed items (see also <xref ref-type="bibr" rid="B73">Kendrick et al. 2023</xref>). Manual and non-manual gestures should instead be regarded as intrinsic components of conversational turns, as we have demonstrated for feedback in this study. A single head nod or head tilt, alone or in combination with other non-manual signals (e.g., widened eyes, eyebrow movements) or with manual gestures (e.g., palm-up gestures), constitutes an integral element of a feedback event.</p>
<p>Our results furthermore reinforce the urgent need for models of language and the Language Faculty that engage with the inherently multimodal nature of human communication. The Multimodal Language Faculty (MLF) model, a cognitive framework recently developed by Cohn &amp; Schilperoord (<xref ref-type="bibr" rid="B34">2024</xref>), aims to account for both unimodal and multimodal language use, as well as other forms of communication across various modalities. While this model shows considerable flexibility, particularly through its proposed Multimodal Parallel Architecture, it remains unclear how interaction and dialogue are formally represented, as these components are not explicitly addressed in the current formulation of MLF. Our data highlight the need to extend such models to incorporate fundamentally interactional phenomena such as feedback, where the focus is not primarily on truth-conditional meaning but rather on interactional function. In contrast, the Interactional Spine Hypothesis (<xref ref-type="bibr" rid="B130">Wiltschko 2021</xref>) offers a highly detailed theoretical account of interaction, providing new perspectives on several interactional mechanisms including responsive actions. However, this model does not currently provide a framework for incorporating meaningful visual signals as core interactional phenomena. Extending this model to incorporate visual signals will be a fruitful path in the future, as the model explicitly predicts multi-functionality of linguistic items and is thus very well-suited for the integration of multi-functional non-manual signals. Taken together, our results suggest that visual, non-manual signals are an integral component of human interaction. We are therefore in urgent need of theoretical models that integrate both the visual and interactional dimensions of human language, moving beyond speech-centered paradigms.</p>
</sec>
</sec>
<sec>
<title>7 Conclusion and future work</title>
<p>This study explored the composition of feedback events across four languages and two language modalities, employing a novel cross-linguistic and cross-modal approach that considers the full constellation of communicative resources used to provide feedback. The findings reveal that non-manual signals are fundamental to conversational interaction. This has significant implications for linguistic theory, suggesting the need to move beyond purely speech-based models. A reconceptualization is required to account for the multimodal nature of interaction, as speech alone does not provide a complete picture of how communication is performed.</p>
<p>Our results also point to the importance of examining intra-individual variation in feedback. In future studies, we plan to compare interactions between familiar and unfamiliar conversation partners. By employing eye-tracking devices during data collection, we aim to measure (mutual) eye gaze before, during, and after feedback events, allowing for a more nuanced understanding of the role of gaze behavior in feedback. The inclusion of eye gaze as a key component of feedback will be a major focus of our future work, addressing a limitation of the present study, where manual annotation of gaze behavior proved challenging.</p>
<p>Additionally, we acknowledge the absence of a prosodic analysis of verbal feedback in this study, as our focus was primarily on articulators. Future research will aim to incorporate prosodic features alongside gaze, offering a more comprehensive understanding of feedback mechanisms.</p>
<p>Expanding the research to include a wider variety of linguistic and cultural contexts could yield valuable insights. For example, in Bulgarian communication, agreement or affirmation is often signaled through a lateral head movement. Investigating whether these gestures influence the use of head movement during feedback interactions would be an interesting area for future research. Moreover, examining contexts where direct gaze is culturally less common, such as among speakers of Tzeltal (Mayan) (<xref ref-type="bibr" rid="B103">Rossano et al. 2009</xref>), could broaden our understanding of feedback mechanisms. Studies like these will help to illuminate how different linguistic communities navigate feedback, offering a richer, cross-cultural perspective on multimodal interactional strategies.</p>
<p>Through this research, we contribute to the theoretical framework surrounding multimodal feedback, advancing the understanding of feedback mechanisms within diverse linguistic and interactional contexts. The proposed model of feedback styles is a first attempt to understand how interlocutors navigate social interaction by adjusting their multimodal behaviour. Studies like ours pave the way toward a more comprehensive understanding of how multimodal turns operate across different languages, helping to illuminate the universal and variable aspects of feedback in human communication.</p>
</sec>
</body>
<back>
<sec>
<title>Appendix: Coding scheme for annotation of multimodal feedback events</title>
<p>The following table presents our annotation scheme for multimodal feedback in signed and spoken interactions. While developing this scheme, we drew inspiration from prior literature, incorporating certain labels and abbreviations (e.g., head and mouth gestures: Burkova (<xref ref-type="bibr" rid="B27">2015</xref>), <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://rsl.nstu.ru/">http://rsl.nstu.ru/</ext-link>; smiles and laughter: Smiling Intensity Scale, Gironzetti et al. (<xref ref-type="bibr" rid="B49">2016</xref>); eye blinks: H&#246;mke et al. (<xref ref-type="bibr" rid="B65">2017</xref>)) as well as insights from the data analyzed in this study. We excluded body movements and eye blinks from the current analysis and leave the investigation of these two features for future research.</p>
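<p>To illustrate how the coding scheme can be applied when working with exported annotation files, the following minimal sketch shows how a subset of the head-tier abbreviations could be stored as a named vector in R and used to relabel raw annotation values. The sketch is not part of the published analysis script, and the object names <monospace>head_labels</monospace> and <monospace>anno</monospace> are hypothetical.</p>
<preformat># Minimal sketch, assuming annotations have been exported (e.g., from ELAN)
# into a data frame 'anno' with a column 'head' holding the abbreviated codes.
head_labels &lt;- c(
  hnn = "many short head nods",
  sn  = "small (shallow) head nod",
  ln  = "large head nod",
  lnn = "many large nods",
  hs  = "head shake"
)

# Map abbreviated codes onto their full labels:
# anno$head_label &lt;- unname(head_labels[anno$head])</preformat>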
<table-wrap id="Sx1.tab1"><label/>
<table>
<tbody>
<tr>
<td align="left" valign="top"><bold>A<sc>bbreviation</sc></bold></td>
<td align="left" valign="top"><bold>M<sc>eaning</sc></bold></td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><italic>Tier mouth gesture</italic></td>
</tr>
<tr>
<td align="left" valign="top">lbt</td>
<td align="left" valign="top">biting of the lower lip</td>
</tr>
<tr>
<td align="left" valign="top">ldn</td>
<td align="left" valign="top">corners of the mouth lowered down</td>
</tr>
<tr>
<td align="left" valign="top">ldr</td>
<td align="left" valign="top">lips sucked in</td>
</tr>
<tr>
<td align="left" valign="top">lo</td>
<td align="left" valign="top">lips rounded</td>
</tr>
<tr>
<td align="left" valign="top">lpd</td>
<td align="left" valign="top">lower lip pushed forward</td>
</tr>
<tr>
<td align="left" valign="top">lpf</td>
<td align="left" valign="top">lips pushed forward</td>
</tr>
<tr>
<td align="left" valign="top">lp</td>
<td align="left" valign="top">lips pressed together</td>
</tr>
<tr>
<td align="left" valign="top">lvb</td>
<td align="left" valign="top">lips tremble</td>
</tr>
<tr>
<td align="left" valign="top">mbl</td>
<td align="left" valign="top">blowing out air</td>
</tr>
<tr>
<td align="left" valign="top">mo</td>
<td align="left" valign="top">mouth open</td>
</tr>
<tr>
<td align="left" valign="top">msc</td>
<td align="left" valign="top">sucking in air</td>
</tr>
<tr>
<td align="left" valign="top">tch</td>
<td align="left" valign="top">tongue against the cheek</td>
</tr>
<tr>
<td align="left" valign="top">tt</td>
<td align="left" valign="top">tongue out</td>
</tr>
<tr>
<td align="left" valign="top">cms</td>
<td align="left" valign="top">closed mouth smile (s1)</td>
</tr>
<tr>
<td align="left" valign="top">oms</td>
<td align="left" valign="top">open mouth smile (s2)</td>
</tr>
<tr>
<td align="left" valign="top">woms</td>
<td align="left" valign="top">wide open mouth smile (s3)</td>
</tr>
<tr>
<td align="left" valign="top">lgh</td>
<td align="left" valign="top">laughing smile or laugh, smiling with jaw dropped (s4)</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><italic>Tier head</italic></td>
</tr>
<tr>
<td align="left" valign="top">hnn</td>
<td align="left" valign="top">many short head nods</td>
</tr>
<tr>
<td align="left" valign="top">sn</td>
<td align="left" valign="top">small (shallow) head nod</td>
</tr>
<tr>
<td align="left" valign="top">ln</td>
<td align="left" valign="top">large head nod</td>
</tr>
<tr>
<td align="left" valign="top">lnn</td>
<td align="left" valign="top">many large nods</td>
</tr>
<tr>
<td align="left" valign="top">mn</td>
<td align="left" valign="top">mixed nod (e.g. one large nod followed by small nod(s))</td>
</tr>
<tr>
<td align="left" valign="top">hb</td>
<td align="left" valign="top">head tilt back</td>
</tr>
<tr>
<td align="left" valign="top">hbn</td>
<td align="left" valign="top">head tilt back with subsequent head nod</td>
</tr>
<tr>
<td align="left" valign="top">hs</td>
<td align="left" valign="top">head shake</td>
</tr>
<tr>
<td align="left" valign="top">hmb</td>
<td align="left" valign="top">head move backward</td>
</tr>
<tr>
<td align="left" valign="top">hmf</td>
<td align="left" valign="top">head move forward</td>
</tr>
<tr>
<td align="left" valign="top">hl</td>
<td align="left" valign="top">head turn to the left</td>
</tr>
<tr>
<td align="left" valign="top">hlb</td>
<td align="left" valign="top">head turn to the left &amp; tilted backwards</td>
</tr>
<tr>
<td align="left" valign="top">hlf</td>
<td align="left" valign="top">head turn to the left &amp; tilted forward</td>
</tr>
<tr>
<td align="left" valign="top">hr</td>
<td align="left" valign="top">head turn to the right</td>
</tr>
<tr>
<td align="left" valign="top">hrb</td>
<td align="left" valign="top">head turn to the right &amp; tilted backwards</td>
</tr>
<tr>
<td align="left" valign="top">hrf</td>
<td align="left" valign="top">head turn to the right &amp; tilted forward</td>
</tr>
<tr>
<td align="left" valign="top">hth</td>
<td align="left" valign="top">head lowering</td>
</tr>
<tr>
<td align="left" valign="top">hths</td>
<td align="left" valign="top">head lowering &amp; head shake</td>
</tr>
<tr>
<td align="left" valign="top">ht</td>
<td align="left" valign="top">head tilted to the right or left shoulder</td>
</tr>
<tr>
<td align="left" valign="top">cu</td>
<td align="left" valign="top">chin up (no head back tilt)</td>
</tr>
<tr>
<td align="left" valign="top">wig</td>
<td align="left" valign="top">head wiggle (lateral head movements to both sides, neither shake nor nod)</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><italic>Tier eyebrows</italic></td>
</tr>
<tr>
<td align="left" valign="top">bf</td>
<td align="left" valign="top">eyebrows furrowed (=eyebrows are pulled together)</td>
</tr>
<tr>
<td align="left" valign="top">br</td>
<td align="left" valign="top">eyebrows raised</td>
</tr>
<tr>
<td align="left" valign="top">brd</td>
<td align="left" valign="top">eyebrows lowered</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2">Tier eyes</td>
</tr>
<tr>
<td align="left" valign="top">mbl</td>
<td align="left" valign="top">multiple eye blinks</td>
</tr>
<tr>
<td align="left" valign="top">sbl</td>
<td align="left" valign="top">short blink (no longer than 410 ms)</td>
</tr>
<tr>
<td align="left" valign="top">lbl</td>
<td align="left" valign="top">long blink (longer than 410 ms)</td>
</tr>
<tr>
<td align="left" valign="top">esc</td>
<td align="left" valign="top">eyes squinted</td>
</tr>
<tr>
<td align="left" valign="top">ew</td>
<td align="left" valign="top">eyes wide opened</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><italic>Tier nose</italic></td>
</tr>
<tr>
<td align="left" valign="top">nw</td>
<td align="left" valign="top">nose wrinkled</td>
</tr>
<tr>
<td align="left" valign="top">nbl</td>
<td align="left" valign="top">nose blows out air</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><italic>Tier cheeks</italic></td>
</tr>
<tr>
<td align="left" valign="top">chp</td>
<td align="left" valign="top">cheeks blown out/puffed</td>
</tr>
<tr>
<td align="left" valign="top">chs</td>
<td align="left" valign="top">cheeks sucked in</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><italic>Tier shoulders</italic></td>
</tr>
<tr>
<td align="left" valign="top">shf</td>
<td align="left" valign="top">shoulders curved forwards</td>
</tr>
<tr>
<td align="left" valign="top">shs</td>
<td align="left" valign="top">shoulder shrug (raising and lowering of shoulders like &#8220;I don&#8217;t know!&#8221;)</td>
</tr>
<tr>
<td align="left" valign="top" colspan="2"><italic>Tier body</italic></td>
</tr>
<tr>
<td align="left" valign="top">bb</td>
<td align="left" valign="top">body leaned backward</td>
</tr>
<tr>
<td align="left" valign="top">bf</td>
<td align="left" valign="top">body leaned forward</td>
</tr>
<tr>
<td align="left" valign="top">bu</td>
<td align="left" valign="top">body moves/raises up</td>
</tr>
<tr>
<td align="left" valign="top">bt</td>
<td align="left" valign="top">body turned to the left/to the right</td>
</tr>
<tr>
<td align="left" valign="top">bl</td>
<td align="left" valign="top">body leaned to the left/to the right (without turning)</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>Data availability</title>
<p>Three of the corpora associated with this article are published and thus accessible, either open access (DGS) or upon request (RUS, RSL). The data set and video examples used in this study as well as the script for data analysis are available at <ext-link ext-link-type="uri" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="https://doi.org/10.6084/m9.figshare.30738701">https://doi.org/10.6084/m9.figshare.30738701</ext-link>.</p>
</sec>
<sec>
<title>Ethics and consent</title>
<p>This study uses existing corpora. We only use corpora for which informed consent was obtained from all participants. The identities of all participants have been anonymized.</p>
</sec>
<sec>
<title>Funding information</title>
<p>This research was funded by the University of Cologne Excellent Research Program, Funding line Cluster Development Program, project Language Challenges.</p>
</sec>
<sec>
<title>Acknowledgements</title>
<p>We are grateful to the signers and speakers who participated in the corpus data collections. Moreover, we would like to express our gratitude to three anonymous reviewers for their thoughtful and inspiring comments on an earlier version of this paper.</p>
<p>We thank Roman Poryadin, Undine Kuhlmann, Milena Pielen and Lina Herrmann for their assistance with annotations. We also thank our colleagues, in particular Birgit Hellwig, Pamela Perniss, Nikolaus P. Himmelmann and Alice Mitchell for their insightful comments on an earlier version of this research. We are grateful to Klaus von Heusinger for suggesting the term &#8216;feedback event&#8217;.</p>
</sec>
<sec>
<title>Competing interests</title>
<p>The authors have no competing interests to declare.</p>
</sec>
<sec>
<title>Authors&#8217; contributions</title>
<p>Conceptualization: AB, SG; Data collection: AB; Basic data annotation: AB, SG, JH, TAH; Detailed data annotation (correction, elaboration): AB, JH, SG; Supervision of student assistants: AB, SG; Data wrangling and statistical analysis: SG; Investigation: AB, SG; Writing: AB, SG. AB and SG contributed equally to this work as joint first authors. The remaining authors are listed in alphabetical order by surname.</p>
</sec>
<fn-group>
<fn id="n1"><p>We acknowledge the challenges of providing traditional conversation analysis (CA) transcripts for the sign language examples in our study and commend recent research that has adopted more innovative and visually accessible methods, such as graphic comic-style representations (<xref ref-type="bibr" rid="B114">Skedsmo 2020</xref>; <xref ref-type="bibr" rid="B115">2023</xref>). However, in our case, still images proved ineffective. The feedback signals we examine&#8212;such as slow, repeated head nods, subtle smiles, eyebrow raises or backward head movements&#8212;are often too subtle to be clearly conveyed in static images. Therefore, we have made our examples available online where possible and provide links to short video clips that illustrate the relevant feedback signals discussed in this paper. The videos of spoken German were collected in 2009 without participant consent for publication, and therefore cannot be made publicly available. Additionally, we provide multilinear notations inspired by sign language glossing conventions and using some conventions from the conversational transcription system GAT2 (<xref ref-type="bibr" rid="B112">Selting et al. 2009</xref>). In each example of dyadic interaction, the two signers/speakers are labeled as A and B. Following sign language annotation conventions, manual signs are glossed using small caps. If a single manual sign translates into multiple English words, these words are connected by hyphens. Non-manual signals are glossed with overlines, starting where the non-manual begins and ending where it ends. For DGS (German Sign Language), the glosses (as well as their translations) are based on the original corpus transcripts.</p></fn>
<fn id="n2"><p>There are certain elements of feedback that may be intentionally produced in some instances and unintentionally produced in other instances (e.g., smiles, eyebrow flashes). Here we consider all instances of such elements to be signals, and do not attempt to differentiate intentional from unintentional signals.</p></fn>
<fn id="n3"><p>Mouthings are mouth movements produced during sign language interaction which resemble words (or parts of them) from the surrounding spoken/written language(<xref ref-type="bibr" rid="B11">Bauer &amp; Kyuseva 2022</xref>).</p></fn>
<fn id="n4"><p>We follow Ameka &amp; Terkourafi (<xref ref-type="bibr" rid="B6">2019</xref>) and use the term &#8220;co-present dyadic conversation&#8221; to refer to what is often called &#8220;face-to-face&#8221; interaction. We use it to describe interactions between two people who share the same physical space, while recognizing that cultural norms, such as avoiding eye contact, can influence the shape of social interaction (<xref ref-type="bibr" rid="B103">Rossano et al. 2009</xref>).</p></fn>
<fn id="n5"><p>For more information on the contents of the category of <italic>talk</italic>, see Section 3.2.</p></fn>
<fn id="n6"><p>Some scholars propose a distinction between continuers and acknowledgments, where acknowledgments indicate agreement with or understanding of the previous turn (<xref ref-type="bibr" rid="B46">Gardner 2001: 2</xref>). In this study, we do not distinguish between continuers and acknowledgments, as they often share similar forms. Moreover, it is notoriously difficult to determine whether an interactant aims to indicate agreement or just pass on the opportunity for repair. We suggest that a future operationalization of the distinction between the two could be conceptualized on the basis of their sequential position: continuers typically follow a volunteered initial utterance (second position), while acknowledgments follow a conditionally relevant response (third position). We leave an evaluation of this proposal for future research.</p></fn>
<fn id="n7"><p>Lutzenberger et al. (<xref ref-type="bibr" rid="B82">2024</xref>) employ the term &#8216;verbal&#8217; to include manual BSL signs and spoken English words. We usetalk as an alternative to &#8216;verbal&#8217; in this context, as &#8216;verbal&#8217; typically carries connotations specific to spoken language. Moreover, we include mouthings, as these clearly convey lexical (e.g.,ja &#8216;yes&#8217;) and non-lexical content (e.g.,ah).</p></fn>
<fn id="n8"><p>Information on whether interlocutors knew each other is not publicly available in the DGS Corpus metadata. We therefore selected interactions that, based on their content, strongly suggested familiarity between the participants&#8212;for example, references to mutual friends, shared holidays, or inquiries about each other&#8217;s partners.</p></fn>
<fn id="n9"><p>We acknowledge that the Russian as well as RSL data in this study stems primarily from Russian interactants living in Germany, which may raise questions about cultural generalizability. This is a valid concern and one we carefully considered during data collection. To address this, we selected participants who use RSL/Russian in their daily lives, particularly in interactions with family and friends. For example, one signer reported communicating exclusively with RSL-signing friends. Additionally, both the RSL and spoken Russian data were collected from individuals who migrated to Germany within the past two to five years. As no multimodal corpus of dyadic interaction currently exists for spoken Russian, and the available RSL corpus from Russia (<xref ref-type="bibr" rid="B27">Burkova 2015</xref>) includes only a limited amount of free interaction, we opted to collect new data in Germany. Fieldwork in Russia was not feasible after 2022. Despite these circumstances, we believe the data remain valid and representative for our research purposes.</p></fn>
<fn id="n10"><p>The number of 300ms was chosen as Trujillo et al. (<xref ref-type="bibr" rid="B120">2018</xref>; <xref ref-type="bibr" rid="B121">2019</xref>) found it to be the approximate minimum length of time that na&#239;ve observers need to consistently identify a cessation of movement.</p></fn>
<fn id="n11"><p>Of course, manual annotations are never fully objective. However, mistakes occur in automatic annotations as well, which is why we are confident that our data offer a good representation of the design of feedback events in our corpora.</p></fn>
<fn id="n12"><p>We used the following settings: distance=&#8220;euclidean&#8221;, method=&#8220;ward.D2&#8221;.</p></fn>
<fn id="n13"><p>This is consistent with findings for responses to assertions in DGS where non-manuals are also pervasive (<xref ref-type="bibr" rid="B81">Loos et al. 2024: 445</xref>).</p></fn>
<fn id="n14"><p>The gray bars indicate the spread across all languages and interactants. Each line represents an individual.</p></fn>
<fn id="n15"><p>Some language-specific differences are visible and should be addressed in future research.</p></fn>
<fn id="n16"><p>We are grateful to an anonymous reviewer for bringing up this point.</p></fn>
<fn id="n17"><p>A preliminary investigation of a conversational corpus (<xref ref-type="bibr" rid="B124">van Gijn et al. 2011</xref>) of Yurakar&#233; (isolate, Bolivia) suggests that in this language, head movements are extremely rarely used in feedback, with speakers relying mostly on spoken elements, thus potentially representing the Talk-dominant style. The Papuan language Y&#233;l&#238; Dnye, where eye blinks and eyebrow flashes are regularly employed as continuers (<xref ref-type="bibr" rid="B79">Levinson 2015: 406</xref>), may be a candidate for the Face-dominant style.</p></fn>
</fn-group>
<ref-list>
<ref id="B1"><mixed-citation publication-type="journal"><string-name><surname>Abner</surname>, <given-names>Natasha</given-names></string-name> &amp; <string-name><surname>Cooperrider</surname>, <given-names>Kensy</given-names></string-name> &amp; <string-name><surname>Goldin-Meadow</surname>, <given-names>Susan</given-names></string-name>. <year>2015</year>. <article-title>Gesture for linguists: A handy primer</article-title>. <source>Language and Linguistics Compass</source> <volume>9</volume>(<issue>11</issue>). <fpage>437</fpage>&#8211;<lpage>451</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/lnc3.12168</pub-id></mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="book"><string-name><surname>Allwood</surname>, <given-names>Jens</given-names></string-name> &amp; <string-name><surname>Cerrato</surname>, <given-names>Loredana</given-names></string-name>. <year>2003</year>. <chapter-title>A study of gestural feedback expressions</chapter-title>. In <source>First Nordic Symposium on Multimodal Communication</source>, <fpage>7</fpage>&#8211;<lpage>22</lpage>. <publisher-loc>Copenhagen</publisher-loc>: <publisher-name>Gothenburg University Publications</publisher-name>.</mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="journal"><string-name><surname>Allwood</surname>, <given-names>Jens</given-names></string-name> &amp; <string-name><surname>Cerrato</surname>, <given-names>Loredana</given-names></string-name> &amp; <string-name><surname>Jokinen</surname>, <given-names>Kristiina</given-names></string-name> &amp; <string-name><surname>Navarretta</surname>, <given-names>Costanza</given-names></string-name> &amp; <string-name><surname>Paggio</surname>, <given-names>Patrizia</given-names></string-name>. <year>2007a</year>. <article-title>The MUMIN coding scheme for the annotation of feedback, turn management and sequencing phenomena</article-title>. <source>Language Resources and Evaluation</source> <volume>41</volume>, <fpage>273</fpage>&#8211;<lpage>287</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s10579-007-9061-5</pub-id></mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="journal"><string-name><surname>Allwood</surname>, <given-names>Jens</given-names></string-name> &amp; <string-name><surname>Kopp</surname>, <given-names>Stefan</given-names></string-name> &amp; <string-name><surname>Grammer</surname>, <given-names>Karl</given-names></string-name> &amp; <string-name><surname>Ahls&#233;n</surname>, <given-names>Elisabeth</given-names></string-name> &amp; <string-name><surname>Oberzaucher</surname>, <given-names>Elisabeth</given-names></string-name> &amp; <string-name><surname>Koppensteiner</surname>, <given-names>Markus</given-names></string-name>. <year>2007b</year>. <article-title>The analysis of embodied communicative feedback in multimodal corpora: A prerequisite for behavior simulation</article-title>. <source>Language Resources and Evaluation</source> <volume>41</volume>, <fpage>255</fpage>&#8211;<lpage>272</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s10579-007-9056-2</pub-id></mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><string-name><surname>Allwood</surname>, <given-names>Jens</given-names></string-name> &amp; <string-name><surname>Nivre</surname>, <given-names>Joakim</given-names></string-name> &amp; <string-name><surname>Ahls&#233;n</surname>, <given-names>Elisabeth</given-names></string-name>. <year>1992</year>. <article-title>On the semantics and pragmatics of linguistic feedback</article-title>. <source>Journal of Semantics</source> <volume>9</volume>(<issue>1</issue>). <fpage>1</fpage>&#8211;<lpage>26</lpage>. DOI: <pub-id pub-id-type="doi">10.1093/jos/9.1.1</pub-id></mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><string-name><surname>Ameka</surname>, <given-names>Felix K.</given-names></string-name> &amp; <string-name><surname>Terkourafi</surname>, <given-names>Marina</given-names></string-name>. <year>2019</year>. <article-title>What if&#8230;? Imagining non-Western perspectives on pragmatic theory and practice</article-title>. <source>Journal of Pragmatics</source> <volume>145</volume>. <fpage>72</fpage>&#8211;<lpage>82</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.pragma.2019.04.001</pub-id></mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="journal"><string-name><surname>Andries</surname>, <given-names>Fien</given-names></string-name> &amp; <string-name><surname>Meissl</surname>, <given-names>Katharina</given-names></string-name> &amp; <string-name><surname>Vries</surname>, <given-names>Clarissa de</given-names></string-name> &amp; <string-name><surname>Feyaerts</surname>, <given-names>Kurt</given-names></string-name> &amp; <string-name><surname>Oben</surname>, <given-names>Bert</given-names></string-name> &amp; <string-name><surname>Sambre</surname>, <given-names>Paul</given-names></string-name> &amp; <string-name><surname>Vermeerbergen</surname>, <given-names>Myriam</given-names></string-name> &amp; <string-name><surname>Br&#244;ne</surname>, <given-names>Geert</given-names></string-name>. <year>2023</year>. <article-title>Multimodal stance-taking in interaction&#8212;A systematic literature review</article-title>. <source>Frontiers in Communication</source> <volume>8</volume>. <elocation-id>1187977</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3389/fcomm.2023.1187977</pub-id></mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="book"><string-name><surname>Backer</surname>, <given-names>Charlotte</given-names></string-name>. <year>1977</year>. <chapter-title>Regulators and turn-taking in American Sign Language discourse</chapter-title>. In <string-name><surname>Friedman</surname>, <given-names>Lynn</given-names></string-name> (ed.), <source>On the other hand: New perspectives on American Sign Language</source>, <fpage>138</fpage>&#8211;<lpage>139</lpage>. <publisher-loc>NY</publisher-loc>: <publisher-name>Academic Press</publisher-name>.</mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><string-name><surname>Bauer</surname>, <given-names>Anastasia</given-names></string-name>. <year>2023</year>. <article-title>Russian multimodal conversational data</article-title>. Data Center for the Humanities, University of Cologne. DOI: <pub-id pub-id-type="doi">10.18716/DCH/A.00000016</pub-id></mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><string-name><surname>Bauer</surname>, <given-names>Anastasia</given-names></string-name> &amp; <string-name><surname>Kuder</surname>, <given-names>Anna</given-names></string-name> &amp; <string-name><surname>Schulder</surname>, <given-names>Marc</given-names></string-name> &amp; <string-name><surname>Schepens</surname>, <given-names>Job</given-names></string-name>. <year>2024</year>. <article-title>Phonetic differences between affirmative and feedback head nods in German Sign Language (DGS): A pose estimation study</article-title>. <source>PLOS ONE</source> <volume>19</volume>(<issue>5</issue>). <elocation-id>e0304040</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1371/journal.pone.0304040</pub-id></mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><string-name><surname>Bauer</surname>, <given-names>Anastasia</given-names></string-name> &amp; <string-name><surname>Kyuseva</surname>, <given-names>Masha</given-names></string-name>. <year>2022</year>. <article-title>New insights into mouthings: Evidence from a corpus-based study of Russian Sign Language</article-title>. <source>Frontiers in Psychology</source> <volume>12</volume>. <elocation-id>779958</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3389/fpsyg.2021.779958</pub-id></mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><string-name><surname>Bauer</surname>, <given-names>Anastasia</given-names></string-name> &amp; <string-name><surname>Poryadin</surname>, <given-names>Roman</given-names></string-name>. <year>2023</year>. <article-title>Russian Sign Language conversations</article-title>. Data Center for the Humanities, University of Cologne. DOI: <pub-id pub-id-type="doi">10.18716/DCH/A.00000028</pub-id></mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="journal"><string-name><surname>Bavelas</surname>, <given-names>Janet</given-names></string-name>. <year>1990</year>. <article-title>Nonverbal and social aspects of discourse in face-to-face interaction</article-title>. <source>Text &#8211; Interdisciplinary Journal for the Study of Discourse</source> <volume>10</volume>(<issue>1&#8211;2</issue>). <fpage>5</fpage>&#8211;<lpage>8</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/text.1.1990.10.1-2.5</pub-id></mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><string-name><surname>Bavelas</surname>, <given-names>Janet</given-names></string-name> &amp; <string-name><surname>Chovil</surname>, <given-names>Nicole</given-names></string-name>. <year>2018</year>. <article-title>Some pragmatic functions of conversational facial gestures</article-title>. <source>Gesture</source> <volume>17</volume>(<issue>1</issue>). <fpage>98</fpage>&#8211;<lpage>127</lpage>. DOI: <pub-id pub-id-type="doi">10.1075/gest.00012.bav</pub-id></mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><string-name><surname>Bavelas</surname>, <given-names>Janet</given-names></string-name> &amp; <string-name><surname>Chovil</surname>, <given-names>Nicole</given-names></string-name> &amp; <string-name><surname>Lawrie</surname>, <given-names>Douglas</given-names></string-name> &amp; <string-name><surname>Wade</surname>, <given-names>Allan</given-names></string-name>. <year>1992</year>. <article-title>Interactive gestures</article-title>. <source>Discourse Processes</source> <volume>15</volume>(<issue>4</issue>). <fpage>469</fpage>&#8211;<lpage>489</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/01638539209544823</pub-id></mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="journal"><string-name><surname>Bavelas</surname>, <given-names>Janet</given-names></string-name> &amp; <string-name><surname>Coates</surname>, <given-names>Linda</given-names></string-name> &amp; <string-name><surname>Johnson</surname>, <given-names>Trudy</given-names></string-name>. <year>2000</year>. <article-title>Listeners as co-narrators</article-title>. <source>Journal of Personality and Social Psychology</source> <volume>79</volume>(<issue>6</issue>). <fpage>941</fpage>&#8211;<lpage>952</lpage>. DOI: <pub-id pub-id-type="doi">10.1037/0022-3514.79.6.941</pub-id></mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><string-name><surname>Bavelas</surname>, <given-names>Janet</given-names></string-name> &amp; <string-name><surname>Coates</surname>, <given-names>Linda</given-names></string-name> &amp; <string-name><surname>Johnson</surname>, <given-names>Trudy</given-names></string-name>. <year>2002</year>. <article-title>Listener responses as a collaborative process: The role of gaze</article-title>. <source>Journal of Communication</source> <volume>52</volume>(<issue>3</issue>). <fpage>566</fpage>&#8211;<lpage>580</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/j.1460-2466.2002.tb02562.x</pub-id></mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="journal"><string-name><surname>Beach</surname>, <given-names>Wayne A.</given-names></string-name> <year>1993</year>. <article-title>Transitional regularities for &#8216;casual&#8217; &#8220;Okay&#8221; usages</article-title>. <source>Journal of Pragmatics</source> <volume>19</volume>(<issue>4</issue>). <fpage>325</fpage>&#8211;<lpage>352</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/0378-2166(93)90092-4</pub-id></mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="book"><string-name><surname>Bendel Larcher</surname>, <given-names>Sylvia</given-names></string-name>. <year>2021</year>. <source>Interaktionsprofil und Pers&#246;nlichkeit: Eine explorative Studie zum Zusammenhang von sprachlichem Verhalten und Pers&#246;nlichkeit</source>. <publisher-loc>G&#246;ttingen</publisher-loc>: <publisher-name>Verlag f&#252;r Gespr&#228;chsforschung</publisher-name>.</mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="book"><string-name><surname>Bertrand</surname>, <given-names>Roxane</given-names></string-name> &amp; <string-name><surname>Ferr&#233;</surname>, <given-names>Ga&#235;lle</given-names></string-name> &amp; <string-name><surname>Blache</surname>, <given-names>Philippe</given-names></string-name> &amp; <string-name><surname>Espesser</surname>, <given-names>Robert</given-names></string-name> &amp; <string-name><surname>Rauzy</surname>, <given-names>St&#233;phane</given-names></string-name>. <year>2007</year>. <chapter-title>Backchannels revisited from a multimodal perspective</chapter-title>. <source>Proc. Auditory-Visual Speech Processing</source>, <fpage>1</fpage>&#8211;<lpage>5</lpage>. <publisher-name>Hilvarenbeek, Netherlands</publisher-name>.</mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="journal"><string-name><surname>Bilous</surname>, <given-names>Frances R.</given-names></string-name> &amp; <string-name><surname>Krauss</surname>, <given-names>Robert M.</given-names></string-name> <year>1988</year>. <article-title>Dominance and accommodation in the conversational behaviours of same- and mixed-gender dyads</article-title>. <source>Language &amp; Communication</source> <volume>8</volume>(<issue>3&#8211;4</issue>). <fpage>183</fpage>&#8211;<lpage>194</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/0271-5309(88)90016-X</pub-id></mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="journal"><string-name><surname>Blomsma</surname>, <given-names>Peter</given-names></string-name> &amp; <string-name><surname>Skantze</surname>, <given-names>Gabriel</given-names></string-name> &amp; <string-name><surname>Swerts</surname>, <given-names>Marc</given-names></string-name>. <year>2022</year>. <article-title>Backchannel behavior influences the perceived personality of human and artificial communication partners</article-title>. <source>Frontiers in Artificial Intelligence</source> <volume>5</volume>. <elocation-id>835298</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3389/frai.2022.835298</pub-id></mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><string-name><surname>Blomsma</surname>, <given-names>Peter</given-names></string-name> &amp; <string-name><surname>Vaitonyt&#233;</surname>, <given-names>Julija</given-names></string-name> &amp; <string-name><surname>Skantze</surname>, <given-names>Gabriel</given-names></string-name> &amp; <string-name><surname>Swerts</surname>, <given-names>Marc</given-names></string-name>. <year>2024</year>. <article-title>Backchannel behavior is idiosyncratic</article-title>. <source>Language and Cognition</source> <volume>16</volume>(<issue>4</issue>). <fpage>1</fpage>&#8211;<lpage>24</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/langcog.2024.1</pub-id></mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="journal"><string-name><surname>B&#246;rstell</surname>, <given-names>Carl</given-names></string-name>. <year>2024</year>. <article-title>Finding continuers in Swedish Sign Language</article-title>. <source>Linguistics Vanguard</source> <volume>10</volume>(<issue>1</issue>) <fpage>537</fpage>&#8211;<lpage>548</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/lingvan-2024-0025</pub-id></mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><string-name><surname>Boudin</surname>, <given-names>Auriane</given-names></string-name> &amp; <string-name><surname>Bertrand</surname>, <given-names>Roxane</given-names></string-name> &amp; <string-name><surname>Rauzy</surname>, <given-names>St&#233;phane</given-names></string-name> &amp; <string-name><surname>Ochs</surname>, <given-names>Magalie</given-names></string-name> &amp; <string-name><surname>Blache</surname>, <given-names>Philippe</given-names></string-name>. <year>2024</year>. <article-title>A multimodal model for predicting feedback position and type during conversation</article-title>. <source>Speech Communication</source> <volume>159</volume>. <elocation-id>103066</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1016/j.specom.2024.103066</pub-id></mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="journal"><string-name><surname>Brunner</surname>, <given-names>Lawrence J.</given-names></string-name> <year>1979</year>. <article-title>Smiles can be back channels</article-title>. <source>Journal of Personality and Social Psychology</source> <volume>37</volume>(<issue>5</issue>). <fpage>728</fpage>&#8211;<lpage>734</lpage>. DOI: <pub-id pub-id-type="doi">10.1037/0022-3514.37.5.728</pub-id></mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="webpage"><string-name><surname>Burkova</surname>, <given-names>Svetlana</given-names></string-name>. <year>2015</year>. <chapter-title>Russian Sign Language Corpus</chapter-title>. <uri>http://rsl.nstu.ru/</uri>.</mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="journal"><string-name><surname>Byun</surname>, <given-names>Kang-Suk</given-names></string-name> &amp; <string-name><surname>de Vos</surname>, <given-names>Connie</given-names></string-name> &amp; <string-name><surname>Bradford</surname>, <given-names>Anastasia</given-names></string-name> &amp; <string-name><surname>Zeshan</surname>, <given-names>Ulrike</given-names></string-name> &amp; <string-name><surname>Levinson</surname>, <given-names>Stephen</given-names></string-name>. <year>2018</year>. <article-title>First encounters: Repair sequences in cross-signing</article-title>. <source>Topics in Cognitive Science</source> <volume>10</volume>(<issue>2</issue>). <fpage>314</fpage>&#8211;<lpage>334</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/tops.12303</pub-id></mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="journal"><string-name><surname>Cassell</surname>, <given-names>Justine</given-names></string-name> &amp; <string-name><surname>Thorisson</surname>, <given-names>Kristinn R.</given-names></string-name> <year>1999</year>. <article-title>The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents</article-title>. <source>Applied Artificial Intelligence</source> <volume>13</volume>(<issue>4&#8211;5</issue>). <fpage>519</fpage>&#8211;<lpage>538</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/088395199117360</pub-id></mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="webpage"><string-name><surname>Cerrato</surname>, <given-names>Loredana</given-names></string-name> &amp; <string-name><surname>Skhiri</surname>, <given-names>Mustapha</given-names></string-name>. <year>2003</year>. <article-title>A method for the analysis and measurement of communicative head movements in human dialogues</article-title>. In <source>Proceedings of AVSP &#8211; International conference on audio-visual speech processing</source>, <fpage>251</fpage>&#8211;<lpage>256</lpage>. <uri>https://www.isca-archive.org/avsp_2003/cerrato03_avsp.pdf</uri>.</mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="journal"><string-name><surname>Charrad</surname>, <given-names>Malika</given-names></string-name> &amp; <string-name><surname>Ghazzali</surname>, <given-names>Nadia</given-names></string-name> &amp; <string-name><surname>Boiteau</surname>, <given-names>V&#233;ronique</given-names></string-name> &amp; <string-name><surname>Niknafs</surname>, <given-names>Azam</given-names></string-name>. <year>2014</year>. <article-title>NbClust: An R package for determining the relevant number of clusters in a data set</article-title>. <source>Journal of Statistical Software</source> <volume>61</volume>. <fpage>1</fpage>&#8211;<lpage>36</lpage>. DOI: <pub-id pub-id-type="doi">10.18637/jss.v061.i06</pub-id></mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="journal"><string-name><surname>Clancy</surname>, <given-names>Patricia M.</given-names></string-name> &amp; <string-name><surname>Thompson</surname>, <given-names>Sandra A.</given-names></string-name> &amp; <string-name><surname>Suzuki</surname>, <given-names>Ryoko</given-names></string-name> &amp; <string-name><surname>Tao</surname>, <given-names>Hongyin</given-names></string-name>. <year>1996</year>. <article-title>The conversational use of reactive tokens in English, Japanese, and Mandarin</article-title>. <source>Journal of Pragmatics</source> <volume>26</volume>(<issue>3</issue>). <fpage>355</fpage>&#8211;<lpage>387</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/0378-2166(95)00036-4</pub-id></mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="journal"><string-name><surname>Coates</surname>, <given-names>Jennifer</given-names></string-name> &amp; <string-name><surname>Sutton-Spence</surname>, <given-names>Rachel</given-names></string-name>. <year>2001</year>. <article-title>Turn-taking patterns in Deaf conversation</article-title>. <source>Journal of Sociolinguistics</source> <volume>5</volume>(<issue>4</issue>). <fpage>507</fpage>&#8211;<lpage>529</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/1467-9481.00162</pub-id></mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="book"><string-name><surname>Cohn</surname>, <given-names>Neil</given-names></string-name> &amp; <string-name><surname>Schilperoord</surname>, <given-names>Joost</given-names></string-name>. <year>2024</year>. <source>A multimodal language faculty: A cognitive framework for human communication</source>. <publisher-loc>London/New York/Dublin</publisher-loc>: <publisher-name>Bloomsbury</publisher-name>. DOI: <pub-id pub-id-type="doi">10.5040/9781350404861</pub-id></mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="journal"><string-name><surname>Crasborn</surname>, <given-names>Onno</given-names></string-name> &amp; <string-name><surname>Sloetjes</surname>, <given-names>Han</given-names></string-name>. <year>2008</year>. <article-title>Enhanced ELAN functionality for sign language corpora</article-title>. In <source>6th International Conference on Language Resources and Evaluation (LREC 2008) / 3rd Workshop on the Representation and Processing of Sign Languages: Construction and Exploitation of Sign Language corpora</source>, <fpage>39</fpage>&#8211;<lpage>43</lpage>.</mixed-citation></ref>
<ref id="B36"><mixed-citation publication-type="journal"><string-name><surname>Dideriksen</surname>, <given-names>Christina</given-names></string-name> &amp; <string-name><surname>Christiansen</surname>, <given-names>Morten H.</given-names></string-name> &amp; <string-name><surname>Dingemanse</surname>, <given-names>Mark</given-names></string-name> &amp; <string-name><surname>H&#248;jmark-Bertelsen</surname>, <given-names>Malte</given-names></string-name> &amp; <string-name><surname>Johansson</surname>, <given-names>Christer</given-names></string-name> &amp; <string-name><surname>Tyl&#233;n</surname>, <given-names>Kristian</given-names></string-name> &amp; <string-name><surname>Fusaroli</surname>, <given-names>Riccardo</given-names></string-name>. <year>2023</year>. <article-title>Language-specific constraints on conversation: Evidence from Danish and Norwegian</article-title>. <source>Cognitive Science</source> <volume>47</volume>(<issue>11</issue>). DOI: <pub-id pub-id-type="doi">10.1111/cogs.13387</pub-id></mixed-citation></ref>
<ref id="B37"><mixed-citation publication-type="journal"><string-name><surname>Dingemanse</surname>, <given-names>Mark</given-names></string-name> &amp; <string-name><surname>Enfield</surname>, <given-names>N. J.</given-names></string-name> <year>2015</year>. <article-title>Other-initiated repair across languages: Towards a typology of conversational structures</article-title>. <source>Open Linguistics</source> <volume>1</volume>(<issue>1</issue>). <fpage>96</fpage>&#8211;<lpage>118</lpage>. DOI: <pub-id pub-id-type="doi">10.2478/opli-2014-0007</pub-id></mixed-citation></ref>
<ref id="B38"><mixed-citation publication-type="book"><string-name><surname>Dingemanse</surname>, <given-names>Mark</given-names></string-name> &amp; <string-name><surname>Liesenfeld</surname>, <given-names>Andreas</given-names></string-name> &amp; <string-name><surname>Woensdregt</surname>, <given-names>Marieke</given-names></string-name>. <year>2022</year>. <chapter-title>Convergent cultural evolution of continuers (mhmm)</chapter-title>. In <string-name><surname>Ravignani</surname>, <given-names>Andrea</given-names></string-name> &amp; <string-name><surname>Asano</surname>, <given-names>Rie</given-names></string-name> &amp; <string-name><surname>Valente</surname>, <given-names>Daria</given-names></string-name> &amp; <string-name><surname>Ferretti</surname>, <given-names>Francesco</given-names></string-name> &amp; <string-name><surname>Hartmann</surname>, <given-names>Stefan</given-names></string-name> &amp; <string-name><surname>Hayashi</surname>, <given-names>Misato</given-names></string-name> &amp; <string-name><surname>Jadoul</surname>, <given-names>Yannick</given-names></string-name> &amp; <string-name><surname>Martins</surname>, <given-names>Mauricio</given-names></string-name> &amp; <string-name><surname>Oseki</surname>, <given-names>Yoshei</given-names></string-name> &amp; <string-name><surname>Rodrigues</surname>, <given-names>Evelina Daniela</given-names></string-name> &amp; <string-name><surname>Vasileva</surname>, <given-names>Olga</given-names></string-name> &amp; <string-name><surname>Wacewicz</surname>, <given-names>Slawomir</given-names></string-name> (eds.), <source>The evolution of language: Proceedings of the joint conference on language evolution (JCoLE)</source>, <fpage>160</fpage>&#8211;<lpage>167</lpage>. DOI: <pub-id pub-id-type="doi">10.31234/osf.io/65c79</pub-id></mixed-citation></ref>
<ref id="B39"><mixed-citation publication-type="journal"><string-name><surname>Dingemanse</surname>, <given-names>Mark</given-names></string-name> &amp; <string-name><surname>Roberts</surname>, <given-names>Se&#225;n G.</given-names></string-name> &amp; <string-name><surname>Baranova</surname>, <given-names>Julija</given-names></string-name> &amp; <string-name><surname>Blythe</surname>, <given-names>Joe</given-names></string-name> &amp; <string-name><surname>Drew</surname>, <given-names>Paul</given-names></string-name> &amp; <string-name><surname>Floyd</surname>, <given-names>Simeon</given-names></string-name> &amp; <string-name><surname>Gisladottir</surname>, <given-names>Rosa S.</given-names></string-name> &amp; <string-name><surname>Kendrick</surname>, <given-names>Kobin H.</given-names></string-name> &amp; <string-name><surname>Levinson</surname>, <given-names>Stephen</given-names></string-name> &amp; <string-name><surname>Manrique</surname>, <given-names>Elizabeth</given-names></string-name> &amp; <string-name><surname>Rossi</surname>, <given-names>Giovanni</given-names></string-name> &amp; <string-name><surname>Enfield</surname>, <given-names>Nick</given-names></string-name>. <year>2015</year>. <article-title>Universal principles in the repair of communication problems</article-title>. <source>PLoS ONE</source> <volume>10</volume>(<issue>9</issue>). <elocation-id>e0136100</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1371/journal.pone.0136100</pub-id></mixed-citation></ref>
<ref id="B40"><mixed-citation publication-type="journal"><string-name><surname>Dittmann</surname>, <given-names>Allen T.</given-names></string-name> &amp; <string-name><surname>Llewellyn</surname>, <given-names>Lynn G.</given-names></string-name> <year>1968</year>. <article-title>Relationship between vocalizations and head nods as listener responses</article-title>. <source>Journal of Personality and Social Psychology</source> <volume>9</volume>(<issue>1</issue>). <fpage>79</fpage>&#8211;<lpage>84</lpage>. DOI: <pub-id pub-id-type="doi">10.1037/h0025722</pub-id></mixed-citation></ref>
<ref id="B41"><mixed-citation publication-type="journal"><string-name><surname>Drummond</surname>, <given-names>Kent</given-names></string-name> &amp; <string-name><surname>Hopper</surname>, <given-names>Robert</given-names></string-name>. <year>1993</year>. <article-title>Back channels revisited: Acknowledgment tokens and speakership incipiency</article-title>. <source>Research on Language &amp; Social Interaction</source> <volume>26</volume>(<issue>2</issue>). <fpage>157</fpage>&#8211;<lpage>177</lpage>. DOI: <pub-id pub-id-type="doi">10.1207/s15327973rlsi2602_3</pub-id></mixed-citation></ref>
<ref id="B42"><mixed-citation publication-type="journal"><string-name><surname>Duncan</surname>, <given-names>Starkey</given-names></string-name>. <year>1974</year>. <article-title>On the structure of speaker&#8211;auditor interaction during speaking turns</article-title>. <source>Language in Society</source> <volume>3</volume>(<issue>2</issue>). <fpage>161</fpage>&#8211;<lpage>180</lpage>. DOI: <pub-id pub-id-type="doi">10.1017/S0047404500004322</pub-id></mixed-citation></ref>
<ref id="B43"><mixed-citation publication-type="journal"><string-name><surname>Esselink</surname>, <given-names>L. D.</given-names></string-name> &amp; <string-name><surname>Oomen</surname>, <given-names>M.</given-names></string-name> &amp; <string-name><surname>Roelofsen</surname>, <given-names>Floris</given-names></string-name>. <year>2024</year>. <article-title>Technical report: Evaluating inter-annotator agreement for non-manual markers in sign languages</article-title>. DOI: <pub-id pub-id-type="doi">10.21942/UVA.25563540.V2</pub-id>.</mixed-citation></ref>
<ref id="B44"><mixed-citation publication-type="book"><string-name><surname>Fenlon</surname>, <given-names>Jordan</given-names></string-name> &amp; <string-name><surname>Schembri</surname>, <given-names>Adam C.</given-names></string-name> &amp; <string-name><surname>Sutton-Spence</surname>, <given-names>Rachel</given-names></string-name>. <year>2013</year>. <chapter-title>Turn-taking and backchannel behaviour in British Sign Language conversations</chapter-title>. Poster presented at the <italic>11th Theoretical Issues in Sign Language Research Conference</italic>, <publisher-name>University College London</publisher-name>.</mixed-citation></ref>
<ref id="B45"><mixed-citation publication-type="webpage"><string-name><surname>Fujimoto</surname>, <given-names>Donna T.</given-names></string-name> <year>2009</year>. <article-title>Listener responses in interaction: A case for abandoning the term, backchannel</article-title>. <source>Bulletin paper of Osaka Jogakuin College</source> <volume>37</volume>. <fpage>35</fpage>&#8211;<lpage>54</lpage>. <uri>http://ir-lib.wilmina.ac.jp/dspace/bitstream/10775/48/1/03.pdf</uri>.</mixed-citation></ref>
<ref id="B46"><mixed-citation publication-type="book"><string-name><surname>Gardner</surname>, <given-names>Rod</given-names></string-name>. <year>2001</year>. <source>When listeners talk: Response tokens and listener stance</source>. <publisher-loc>Amsterdam/Philadelphia</publisher-loc>: <publisher-name>John Benjamins</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1075/pbns.92</pub-id></mixed-citation></ref>
<ref id="B47"><mixed-citation publication-type="journal"><string-name><surname>Gipper</surname>, <given-names>Sonja</given-names></string-name> &amp; <string-name><surname>K&#246;nig</surname>, <given-names>Katharina</given-names></string-name> &amp; <string-name><surname>Weber</surname>, <given-names>Kathrin</given-names></string-name>. <year>2023</year>. <article-title>Structurally similar formats are not functionally equivalent across languages: Requests for reconfirmation in comparative perspective</article-title>. <source>Contrastive Pragmatics</source> <volume>5</volume>(<issue>1&#8211;2</issue>). <fpage>195</fpage>&#8211;<lpage>237</lpage>. DOI: <pub-id pub-id-type="doi">10.1163/26660393-bja10097</pub-id></mixed-citation></ref>
<ref id="B48"><mixed-citation publication-type="journal"><string-name><surname>Girard-Groeber</surname>, <given-names>Simone</given-names></string-name>. <year>2015</year>. <article-title>The management of turn transition in signed interaction through the lens of overlaps</article-title>. <source>Frontiers in Psychology</source> <volume>6</volume>. <elocation-id>741</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3389/fpsyg.2015.00741</pub-id></mixed-citation></ref>
<ref id="B49"><mixed-citation publication-type="journal"><string-name><surname>Gironzetti</surname>, <given-names>Elisa</given-names></string-name> &amp; <string-name><surname>Pickering</surname>, <given-names>Lucy</given-names></string-name> &amp; <string-name><surname>Huang</surname>, <given-names>Meichan</given-names></string-name> &amp; <string-name><surname>Zhang</surname>, <given-names>Ying</given-names></string-name> &amp; <string-name><surname>Menjo</surname>, <given-names>Shigehito</given-names></string-name> &amp; <string-name><surname>Attardo</surname>, <given-names>Salvatore</given-names></string-name>. <year>2016</year>. <article-title>Smiling synchronicity and gaze patterns in dyadic humorous conversations</article-title>. <source>HUMOR</source> <volume>29</volume>(<issue>2</issue>). <fpage>301</fpage>&#8211;<lpage>324</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/humor-2016-0005</pub-id></mixed-citation></ref>
<ref id="B50"><mixed-citation publication-type="journal"><string-name><surname>Goldin-Meadow</surname>, <given-names>Susan</given-names></string-name> &amp; <string-name><surname>Beilock</surname>, <given-names>Sian L.</given-names></string-name> <year>2010</year>. <article-title>Action&#8217;s influence on thought: The case of gesture</article-title>. <source>Perspectives on Psychological Science</source> <volume>5</volume>(<issue>6</issue>). <fpage>664</fpage>&#8211;<lpage>674</lpage>. DOI: <pub-id pub-id-type="doi">10.1177/1745691610388764</pub-id></mixed-citation></ref>
<ref id="B51"><mixed-citation publication-type="journal"><string-name><surname>Goodwin</surname>, <given-names>Charles</given-names></string-name>. <year>1986</year>. <article-title>Gestures as a resource for the organization of mutual orientation</article-title>. <source>Semiotica</source> <volume>62</volume>(<issue>1&#8211;2</issue>). <fpage>29</fpage>&#8211;<lpage>50</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/semi.1986.62.1-2.29</pub-id></mixed-citation></ref>
<ref id="B52"><mixed-citation publication-type="book"><string-name><surname>Gregori</surname>, <given-names>Alina</given-names></string-name> &amp; <string-name><surname>Amici</surname>, <given-names>Federica</given-names></string-name> &amp; <string-name><surname>Brilmayer</surname>, <given-names>Ingmar</given-names></string-name> &amp; <string-name><surname>&#262;wiek</surname>, <given-names>Aleksandra</given-names></string-name> &amp; <string-name><surname>Fritzsche</surname>, <given-names>Lennart</given-names></string-name> &amp; <string-name><surname>Fuchs</surname>, <given-names>Susanne</given-names></string-name> &amp; <string-name><surname>Henlein</surname>, <given-names>Alexander</given-names></string-name> &amp; <string-name><surname>Herbort</surname>, <given-names>Oliver</given-names></string-name> &amp; <string-name><surname>K&#252;gler</surname>, <given-names>Frank</given-names></string-name> &amp; <string-name><surname>Lemanski</surname>, <given-names>Jens</given-names></string-name> &amp; <string-name><surname>Liebal</surname>, <given-names>Katja</given-names></string-name> &amp; <string-name><surname>L&#252;cking</surname>, <given-names>Andy</given-names></string-name> &amp; <string-name><surname>Mehler</surname>, <given-names>Alexander</given-names></string-name> &amp; <string-name><surname>Nguyen</surname>, <given-names>Kim Tien</given-names></string-name> &amp; <string-name><surname>Pouw</surname>, <given-names>Wim</given-names></string-name> &amp; <string-name><surname>Prieto</surname>, <given-names>Pilar</given-names></string-name> &amp; <string-name><surname>Rohrer</surname>, <given-names>Patrick Louis</given-names></string-name> &amp; <string-name><surname>S&#225;nchez-Ram&#243;n</surname>, <given-names>Paula G.</given-names></string-name> &amp; <string-name><surname>Schulte-R&#252;ther</surname>, <given-names>Martin</given-names></string-name> &amp; <string-name><surname>Schumacher</surname>, <given-names>Petra B.</given-names></string-name> &amp; <string-name><surname>Schweinberger</surname>, <given-names>Stefan R.</given-names></string-name> &amp; <string-name><surname>Struckmeier</surname>, <given-names>Volker</given-names></string-name> &amp; <string-name><surname>Trettenbrein</surname>, <given-names>Patrick C.</given-names></string-name> &amp; <string-name><surname>Von Eiff</surname>, <given-names>Celina I.</given-names></string-name> <year>2023</year>. <chapter-title>A roadmap for technological innovation in multimodal communication research</chapter-title>. In <string-name><surname>Duffy</surname>, <given-names>Vincent G.</given-names></string-name> (ed.), <source>Digital human modeling and applications in health, safety, ergonomics and risk management</source>, <fpage>402</fpage>&#8211;<lpage>438</lpage>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1007/978-3-031-35748-0_30</pub-id></mixed-citation></ref>
<ref id="B53"><mixed-citation publication-type="webpage"><string-name><surname>Gwet</surname>, <given-names>Kilem L.</given-names></string-name> <year>2019</year>. <article-title>irrCAC: Computing chance-corrected agreement coefficients (CAC)</article-title>. R-package. <uri>https://cran.r-project.org/web/packages/irrCAC/irrCAC.pdf</uri></mixed-citation></ref>
<ref id="B54"><mixed-citation publication-type="journal"><string-name><surname>Hadar</surname>, <given-names>Uri</given-names></string-name> &amp; <string-name><surname>Steiner</surname>, <given-names>Timothy</given-names></string-name> &amp; <string-name><surname>Rose</surname>, <given-names>F. Clifford</given-names></string-name>. <year>1985</year>. <article-title>Head movement during listening turns in conversation</article-title>. <source>Journal of Nonverbal Behavior</source> <volume>9</volume>(<issue>4</issue>). <fpage>214</fpage>&#8211;<lpage>228</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/BF00986881</pub-id></mixed-citation></ref>
<ref id="B55"><mixed-citation publication-type="journal"><string-name><surname>Hamilton</surname>, <given-names>Antonia F. De C.</given-names></string-name> &amp; <string-name><surname>Holler</surname>, <given-names>Judith</given-names></string-name>. <year>2023</year>. <article-title>Face2face: Advancing the science of social interaction</article-title>. <source>Philosophical Transactions of the Royal Society B: Biological Sciences</source> <volume>378</volume>(<issue>1875</issue>). <elocation-id>20210470</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1098/rstb.2021.0470</pub-id></mixed-citation></ref>
<ref id="B56"><mixed-citation publication-type="webpage"><string-name><surname>Hanke</surname>, <given-names>Thomas</given-names></string-name> &amp; <string-name><surname>Schulder</surname>, <given-names>Marc</given-names></string-name> &amp; <string-name><surname>Konrad</surname>, <given-names>Reiner</given-names></string-name> &amp; <string-name><surname>Jahn</surname>, <given-names>Elena</given-names></string-name>. <year>2020</year>. <chapter-title>Extending the Public DGS Corpus in Size and Depth</chapter-title>. In <string-name><surname>Efthimiou</surname>, <given-names>Eleni</given-names></string-name> &amp; <string-name><surname>Fotinea</surname>, <given-names>Stavroula-Evita</given-names></string-name> &amp; <string-name><surname>Hanke</surname>, <given-names>Thomas</given-names></string-name> &amp; <string-name><surname>Hochgesang</surname>, <given-names>Julie A.</given-names></string-name> &amp; <string-name><surname>Kristoffersen</surname>, <given-names>Jette</given-names></string-name> &amp; <string-name><surname>Mesch</surname>, <given-names>Johanna</given-names></string-name> (eds.), <source>Proceedings of the LREC2020 9th workshop on the representation and processing of Sign Languages: Sign language resources in the service of the language community, technological challenges and application perspectives</source>, <fpage>75</fpage>&#8211;<lpage>82</lpage>. <publisher-loc>Marseille, France</publisher-loc>: <publisher-name>European Language Resources Association (ELRA)</publisher-name>. <uri>https://www.sign-lang.uni-hamburg.de/lrec/pub/20016.pdf</uri>.</mixed-citation></ref>
<ref id="B57"><mixed-citation publication-type="book"><string-name><surname>Henlein</surname>, <given-names>Alexander</given-names></string-name> &amp; <string-name><surname>Bauer</surname>, <given-names>Anastasia</given-names></string-name> &amp; <string-name><surname>Bhattacharjee</surname>, <given-names>Reetu</given-names></string-name> &amp; <string-name><surname>&#262;wiek</surname>, <given-names>Aleksandra</given-names></string-name> &amp; <string-name><surname>Gregori</surname>, <given-names>Alina</given-names></string-name> &amp; <string-name><surname>K&#252;gler</surname>, <given-names>Frank</given-names></string-name> &amp; <string-name><surname>Lemanski</surname>, <given-names>Jens</given-names></string-name> &amp; <string-name><surname>L&#252;cking</surname>, <given-names>Andy</given-names></string-name> &amp; <string-name><surname>Mehler</surname>, <given-names>Alexander</given-names></string-name> &amp; <string-name><surname>Prieto</surname>, <given-names>Pilar</given-names></string-name> &amp; <string-name><surname>S&#225;nchez-Ram&#243;n</surname>, <given-names>Paula G.</given-names></string-name> &amp; <string-name><surname>Schepens</surname>, <given-names>Job</given-names></string-name> &amp; <string-name><surname>Schulte-R&#252;ther</surname>, <given-names>Martin</given-names></string-name> &amp; <string-name><surname>Schweinberger</surname>, <given-names>Stefan R.</given-names></string-name> &amp; <string-name><surname>Von Eiff</surname>, <given-names>Celina I.</given-names></string-name> <year>2024</year>. <chapter-title>An outlook for AI innovation in multimodal communication research</chapter-title>. In <string-name><surname>Duffy</surname>, <given-names>Vincent G.</given-names></string-name> (ed.), <source>Digital human modeling and applications in health, safety, ergonomics and risk management</source>, <fpage>182</fpage>&#8211;<lpage>234</lpage>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1007/978-3-031-61066-0_13</pub-id></mixed-citation></ref>
<ref id="B58"><mixed-citation publication-type="book"><string-name><surname>Heritage</surname>, <given-names>John</given-names></string-name>. <year>1984</year>. <chapter-title>A change-of-state token and aspects of its sequential placement</chapter-title>. In <string-name><surname>Atkinson</surname>, <given-names>J. Maxwell</given-names></string-name> (ed.), <source>Structures of social action: Studies in Conversation Analysis</source>, <fpage>299</fpage>&#8211;<lpage>345</lpage>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1017/CBO9780511665868.020</pub-id></mixed-citation></ref>
<ref id="B59"><mixed-citation publication-type="webpage"><string-name><surname>Herrmann</surname>, <given-names>Annika</given-names></string-name>. <year>2020</year>. <chapter-title>Prosody: Back-channeling</chapter-title>. In <string-name><surname>Proske</surname>, <given-names>Sina</given-names></string-name> &amp; <string-name><surname>Herrmann</surname>, <given-names>Annika</given-names></string-name> &amp; <string-name><surname>Hosemann</surname>, <given-names>Jana</given-names></string-name> &amp; <string-name><surname>Steinbach</surname>, <given-names>Markus</given-names></string-name> (eds.), <source>A grammar of German Sign Language (DGS)</source> (SIGN-HUB Sign Language Grammar Series 71), <edition>1st</edition> edn. <uri>https://thesignhub.eu/grammar/dgs?tag=100</uri>.</mixed-citation></ref>
<ref id="B60"><mixed-citation publication-type="journal"><string-name><surname>Hess</surname>, <given-names>Lucille J.</given-names></string-name> &amp; <string-name><surname>Johnston</surname>, <given-names>Judith R.</given-names></string-name> <year>1988</year>. <article-title>Acquisition of back channel listener responses to adequate messages</article-title>. <source>Discourse Processes</source> <volume>11</volume>(<issue>3</issue>). <fpage>319</fpage>&#8211;<lpage>335</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/01638538809544706</pub-id></mixed-citation></ref>
<ref id="B61"><mixed-citation publication-type="webpage"><string-name><surname>Hodge</surname>, <given-names>Gabrielle</given-names></string-name> &amp; <string-name><surname>Barth</surname>, <given-names>Danielle</given-names></string-name> &amp; <string-name><surname>Reed</surname>, <given-names>Lauren W.</given-names></string-name> <year>2023</year>. <article-title>Auslan and Matukar Panau: A modality-agnostic look at quotatives</article-title>. <source>The Social Cognition Parallax Interview Corpus (SCOPIC). Language Documentation &amp; Conservation Special Publication</source> <volume>12</volume>. <fpage>85</fpage>&#8211;<lpage>125</lpage>. <uri>https://hdl.handle.net/10125/24744</uri></mixed-citation></ref>
<ref id="B62"><mixed-citation publication-type="journal"><string-name><surname>Hoffmann</surname>, <given-names>Bettina</given-names></string-name> &amp; <string-name><surname>Himmelmann</surname>, <given-names>Nikolaus P.</given-names></string-name> <year>2009</year>. <article-title>M&#252;nster Videokorpus Alltagsgespr&#228;che</article-title>. Unpublished corpus of spoken German.</mixed-citation></ref>
<ref id="B63"><mixed-citation publication-type="journal"><string-name><surname>Holler</surname>, <given-names>Judith</given-names></string-name>. <year>2025</year>. <article-title>Facial clues to conversational intentions</article-title>. <source>Trends in Cognitive Sciences</source> <volume>29</volume>(<issue>8</issue>). <fpage>750</fpage>&#8211;<lpage>762</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.tics.2025.03.006</pub-id></mixed-citation></ref>
<ref id="B64"><mixed-citation publication-type="journal"><string-name><surname>Holler</surname>, <given-names>Judith</given-names></string-name> &amp; <string-name><surname>Levinson</surname>, <given-names>Stephen C.</given-names></string-name> <year>2019</year>. <article-title>Multimodal language processing in human communication</article-title>. <source>Trends in Cognitive Sciences</source> <volume>23</volume>(<issue>8</issue>). <fpage>639</fpage>&#8211;<lpage>652</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.tics.2019.05.006</pub-id></mixed-citation></ref>
<ref id="B65"><mixed-citation publication-type="journal"><string-name><surname>H&#246;mke</surname>, <given-names>Paul</given-names></string-name> &amp; <string-name><surname>Holler</surname>, <given-names>Judith</given-names></string-name> &amp; <string-name><surname>Levinson</surname>, <given-names>Stephen</given-names></string-name>. <year>2017</year>. <article-title>Eye blinking as addressee feedback in face-to-face conversation</article-title>. <source>Research on Language and Social Interaction</source> <volume>50</volume>(<issue>1</issue>). <fpage>54</fpage>&#8211;<lpage>70</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/08351813.2017.1262143</pub-id></mixed-citation></ref>
<ref id="B66"><mixed-citation publication-type="journal"><string-name><surname>Iverson</surname>, <given-names>Jana M.</given-names></string-name> &amp; <string-name><surname>Goldin-Meadow</surname>, <given-names>Susan</given-names></string-name>. <year>1998</year>. <article-title>Why people gesture when they speak</article-title>. <source>Nature</source> <volume>396</volume>(<issue>6708</issue>). <fpage>228</fpage>&#8211;<lpage>228</lpage>. DOI: <pub-id pub-id-type="doi">10.1038/24300</pub-id></mixed-citation></ref>
<ref id="B67"><mixed-citation publication-type="book"><string-name><surname>Jefferson</surname>, <given-names>Gail</given-names></string-name>. <year>1984</year>. <chapter-title>On the organization of laughter in talk about troubles</chapter-title>. In <string-name><surname>Atkinson</surname>, <given-names>J. Maxwell</given-names></string-name> &amp; <string-name><surname>Heritage</surname>, <given-names>John</given-names></string-name> (eds.), <source>Structures of social action: Studies in Conversation Analysis</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>.</mixed-citation></ref>
<ref id="B68"><mixed-citation publication-type="journal"><string-name><surname>Jefferson</surname>, <given-names>Gail</given-names></string-name>. <year>1993</year>. <article-title>Caveat speaker: Preliminary notes on recipient topic-shift implicature</article-title>. <source>Research on Language &amp; Social Interaction</source> <volume>26</volume>(<issue>1</issue>). <fpage>1</fpage>&#8211;<lpage>30</lpage>. DOI: <pub-id pub-id-type="doi">10.1207/s15327973rlsi2601_1</pub-id></mixed-citation></ref>
<ref id="B69"><mixed-citation publication-type="journal"><string-name><surname>Keevallik</surname>, <given-names>Leelo</given-names></string-name>. <year>2018</year>. <article-title>What does embodied interaction tell us about grammar?</article-title> <source>Research on Language and Social Interaction</source> <volume>51</volume>(<issue>1</issue>). <fpage>1</fpage>&#8211;<lpage>21</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/08351813.2018.1413887</pub-id></mixed-citation></ref>
<ref id="B70"><mixed-citation publication-type="journal"><string-name><surname>Kendon</surname>, <given-names>Adam</given-names></string-name>. <year>1967</year>. <article-title>Some functions of gaze-direction in social interaction</article-title>. <source>Acta Psychologica</source> <volume>26</volume>. <fpage>22</fpage>&#8211;<lpage>63</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/0001-6918(67)90005-4</pub-id></mixed-citation></ref>
<ref id="B71"><mixed-citation publication-type="book"><string-name><surname>Kendon</surname>, <given-names>Adam</given-names></string-name>. <year>2004</year>. <source>Gesture: Visible action as utterance</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1017/CBO9780511807572</pub-id></mixed-citation></ref>
<ref id="B72"><mixed-citation publication-type="journal"><string-name><surname>Kendrick</surname>, <given-names>Kobin H.</given-names></string-name> &amp; <string-name><surname>Holler</surname>, <given-names>Judith</given-names></string-name>. <year>2017</year>. <article-title>Gaze direction signals response preference in conversation</article-title>. <source>Research on Language and Social Interaction</source> <volume>50</volume>(<issue>1</issue>). <fpage>12</fpage>&#8211;<lpage>32</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/08351813.2017.1262120</pub-id></mixed-citation></ref>
<ref id="B73"><mixed-citation publication-type="journal"><string-name><surname>Kendrick</surname>, <given-names>Kobin H.</given-names></string-name> &amp; <string-name><surname>Holler</surname>, <given-names>Judith</given-names></string-name> &amp; <string-name><surname>Levinson</surname>, <given-names>Stephen</given-names></string-name>. <year>2023</year>. <article-title>Turn-taking in human face-to-face interaction is multimodal: Gaze direction and manual gestures aid the coordination of turn transitions</article-title>. <source>Philosophical Transactions of the Royal Society B: Biological Sciences</source> <volume>378</volume>(<issue>1875</issue>). <elocation-id>20210473</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1098/rstb.2021.0473</pub-id></mixed-citation></ref>
<ref id="B74"><mixed-citation publication-type="webpage"><string-name><surname>Kolde</surname>, <given-names>Raivo</given-names></string-name>. <year>2019</year>. <chapter-title>pheatmap: Pretty heatmaps. R-package</chapter-title>. <uri>https://cran.r-project.org/web/packages/pheatmap/index.html</uri></mixed-citation></ref>
<ref id="B75"><mixed-citation publication-type="journal"><string-name><surname>Konrad</surname>, <given-names>Reiner</given-names></string-name> &amp; <string-name><surname>Hanke</surname>, <given-names>Thomas</given-names></string-name> &amp; <string-name><surname>Langer</surname>, <given-names>Gabriele</given-names></string-name> &amp; <string-name><surname>Blanck</surname>, <given-names>Dolly</given-names></string-name> &amp; <string-name><surname>Bleicken</surname>, <given-names>Julian</given-names></string-name> &amp; <string-name><surname>Hofmann</surname>, <given-names>Ilona</given-names></string-name> &amp; <string-name><surname>Jeziorski</surname>, <given-names>Olga</given-names></string-name> &amp; <string-name><surname>K&#246;nig</surname>, <given-names>Lutz</given-names></string-name> &amp; <string-name><surname>K&#246;nig</surname>, <given-names>Susanne</given-names></string-name> &amp; <string-name><surname>Nishio</surname>, <given-names>Rie</given-names></string-name> &amp; <string-name><surname>Regen</surname>, <given-names>Anja</given-names></string-name> &amp; <string-name><surname>Salden</surname>, <given-names>Uta</given-names></string-name> &amp; <string-name><surname>Wagner</surname>, <given-names>Sven</given-names></string-name> &amp; <string-name><surname>Worseck</surname>, <given-names>Satu</given-names></string-name> &amp; <string-name><surname>Schulder</surname>, <given-names>Marc</given-names></string-name>. <year>2020</year>. <article-title>MY DGS &#8211; annotated</article-title>. Public Corpus of German Sign Language, 3rd release. DOI: <pub-id pub-id-type="doi">10.25592/dgs.corpus-3.0</pub-id></mixed-citation></ref>
<ref id="B76"><mixed-citation publication-type="journal"><string-name><surname>Koole</surname>, <given-names>Tom</given-names></string-name> &amp; <string-name><surname>Gosen</surname>, <given-names>Myrte N.</given-names></string-name> <year>2024</year>. <article-title>Scopes of recipiency: An organization of responses to informings</article-title>. <source>Journal of Pragmatics</source> <volume>222</volume>. <fpage>25</fpage>&#8211;<lpage>39</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.pragma.2024.01.004</pub-id></mixed-citation></ref>
<ref id="B77"><mixed-citation publication-type="journal"><string-name><surname>Landis</surname>, <given-names>J. Richard</given-names></string-name> &amp; <string-name><surname>Koch</surname>, <given-names>Gary G.</given-names></string-name> <year>1977</year>. <article-title>The measurement of observer agreement for categorical data</article-title>. <source>Biometrics</source> <volume>33</volume>(<issue>1</issue>). <elocation-id>159</elocation-id>. DOI: <pub-id pub-id-type="doi">10.2307/2529310</pub-id></mixed-citation></ref>
<ref id="B78"><mixed-citation publication-type="journal"><string-name><surname>Lepeut</surname>, <given-names>Alysson</given-names></string-name> &amp; <string-name><surname>Shaw</surname>, <given-names>Emily</given-names></string-name>. <year>2022</year>. <article-title>Time is ripe to make interactional moves: Bringing evidence from four languages across modalities</article-title>. <source>Frontiers in Communication</source> <volume>7</volume>. <elocation-id>780124</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3389/fcomm.2022.780124</pub-id></mixed-citation></ref>
<ref id="B79"><mixed-citation publication-type="journal"><string-name><surname>Levinson</surname>, <given-names>Stephen</given-names></string-name>. <year>2015</year>. <article-title>Other-initiated repair in Y&#233;l&#238; Dnye: Seeing eye-to-eye in the language of Rossel Island</article-title>. <source>Open Linguistics</source> <volume>1</volume>(<issue>1</issue>). <fpage>386</fpage>&#8211;<lpage>410</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/opli-2015-0009</pub-id>.</mixed-citation></ref>
<ref id="B80"><mixed-citation publication-type="webpage"><string-name><surname>Lindblad</surname>, <given-names>Gustaf</given-names></string-name> &amp; <string-name><surname>Allwood</surname>, <given-names>Jens</given-names></string-name>. <year>2015</year>. <article-title>Multimodal communicative feedback in Swedish</article-title>. In <source>Proceedings of the 2nd European and the 5th Nordic symposium on multimodal communication</source>, <fpage>53</fpage>&#8211;<lpage>59</lpage>. <uri>https://ep.liu.se/ecp/110/008/ecp15110008.pdf</uri>.</mixed-citation></ref>
<ref id="B81"><mixed-citation publication-type="journal"><string-name><surname>Loos</surname>, <given-names>Cornelia</given-names></string-name> &amp; <string-name><surname>Steinbach</surname>, <given-names>Markus</given-names></string-name> &amp; <string-name><surname>Repp</surname>, <given-names>Sophie</given-names></string-name>. <year>2024</year>. <article-title>Polar response strategies across modalities: Evidence from German Sign Language (DGS)</article-title>. <source>Language</source> <volume>100</volume>(<issue>3</issue>). <fpage>433</fpage>&#8211;<lpage>467</lpage>. DOI: <pub-id pub-id-type="doi">10.1353/lan.2024.a937185</pub-id></mixed-citation></ref>
<ref id="B82"><mixed-citation publication-type="journal"><string-name><surname>Lutzenberger</surname>, <given-names>Hannah</given-names></string-name> &amp; <string-name><surname>Wael</surname>, <given-names>Lierin De</given-names></string-name> &amp; <string-name><surname>Omardeen</surname>, <given-names>Rehana</given-names></string-name> &amp; <string-name><surname>Dingemanse</surname>, <given-names>Mark</given-names></string-name>. <year>2024</year>. <article-title>Interactional infrastructure across modalities: A comparison of repair initiators and continuers in British Sign Language and British English</article-title>. <source>Sign Language Studies</source> <volume>24</volume>(<issue>3</issue>). <fpage>548</fpage>&#8211;<lpage>581</lpage>. DOI: <pub-id pub-id-type="doi">10.1353/sls.2024.a928056</pub-id></mixed-citation></ref>
<ref id="B83"><mixed-citation publication-type="journal"><string-name><surname>Malisz</surname>, <given-names>Zofia</given-names></string-name> &amp; <string-name><surname>W&#322;odarczak</surname>, <given-names>Marcin</given-names></string-name> &amp; <string-name><surname>Buschmeier</surname>, <given-names>Hendrik</given-names></string-name> &amp; <string-name><surname>Skubisz</surname>, <given-names>Joanna</given-names></string-name> &amp; <string-name><surname>Kopp</surname>, <given-names>Stefan</given-names></string-name> &amp; <string-name><surname>Wagner</surname>, <given-names>Petra</given-names></string-name>. <year>2016</year>. <article-title>The ALICO corpus: Analysing the active listener</article-title>. <source>Language Resources and Evaluation</source> <volume>50</volume>, <fpage>411</fpage>&#8211;<lpage>442</lpage>. DOI: <pub-id pub-id-type="doi">10.1007/s10579-016-9355-6</pub-id></mixed-citation></ref>
<ref id="B84"><mixed-citation publication-type="journal"><string-name><surname>Manrique</surname>, <given-names>Elizabeth</given-names></string-name>. <year>2016</year>. <article-title>Other-initiated repair in Argentine Sign Language</article-title>. <source>Open Linguistics</source> <volume>2</volume>(<issue>1</issue>). <fpage>1</fpage>&#8211;<lpage>34</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/opli-2016-0001</pub-id></mixed-citation></ref>
<ref id="B85"><mixed-citation publication-type="journal"><string-name><surname>Manrique</surname>, <given-names>Elizabeth</given-names></string-name> &amp; <string-name><surname>Enfield</surname>, <given-names>Nick</given-names></string-name>. <year>2015</year>. <article-title>Suspending the next turn as a form of repair initiation: Evidence from Argentine Sign Language</article-title>. <source>Frontiers in Psychology</source> <volume>6</volume>. <elocation-id>1326</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3389/fpsyg.2015.01326</pub-id></mixed-citation></ref>
<ref id="B86"><mixed-citation publication-type="journal"><string-name><surname>Marmorstein</surname>, <given-names>Michal</given-names></string-name> &amp; <string-name><surname>Szczepek Reed</surname>, <given-names>Beatrice</given-names></string-name>. <year>2023</year>. <article-title>Newsmarks as an interactional resource for indexing remarkability: A qualitative analysis of Arabic wa&#7735;&#7735;&#257;hi and English really</article-title>. <source>Contrastive Pragmatics</source> <volume>5</volume>(<issue>1&#8211;2</issue>). <fpage>238</fpage>&#8211;<lpage>273</lpage>. DOI: <pub-id pub-id-type="doi">10.1163/26660393-bja10091</pub-id></mixed-citation></ref>
<ref id="B87"><mixed-citation publication-type="journal"><string-name><surname>Maynard</surname>, <given-names>Senko K.</given-names></string-name> <year>1990</year>. <article-title>Conversation management in contrast: Listener response in Japanese and American English</article-title>. <source>Journal of Pragmatics</source> <volume>14</volume>(<issue>3</issue>). <fpage>397</fpage>&#8211;<lpage>412</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/0378-2166(90)90097-W</pub-id></mixed-citation></ref>
<ref id="B88"><mixed-citation publication-type="journal"><string-name><surname>McCarthy</surname>, <given-names>Michael</given-names></string-name>. <year>2003</year>. <article-title>Talking back: &#8220;Small&#8221; interactional response tokens in everyday conversation</article-title>. <source>Research on Language &amp; Social Interaction</source> <volume>36</volume>(<issue>1</issue>). <fpage>33</fpage>&#8211;<lpage>63</lpage>. DOI: <pub-id pub-id-type="doi">10.1207/S15327973RLSI3601_3</pub-id></mixed-citation></ref>
<ref id="B89"><mixed-citation publication-type="journal"><string-name><surname>Mesch</surname>, <given-names>Johanna</given-names></string-name>. <year>2016</year>. <article-title>Manual backchannel responses in signers&#8217; conversations in Swedish Sign Language</article-title>. <source>Language &amp; Communication</source> <volume>50</volume>. <fpage>22</fpage>&#8211;<lpage>41</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.langcom.2016.08.011</pub-id></mixed-citation></ref>
<ref id="B90"><mixed-citation publication-type="journal"><string-name><surname>Mol</surname>, <given-names>Lisette</given-names></string-name> &amp; <string-name><surname>Krahmer</surname>, <given-names>Emiel</given-names></string-name> &amp; <string-name><surname>Maes</surname>, <given-names>Alfons</given-names></string-name> &amp; <string-name><surname>Swerts</surname>, <given-names>Marc</given-names></string-name>. <year>2011</year>. <article-title>Seeing and being seen: The effects on gesture production</article-title>. <source>Journal of Computer-Mediated Communication</source> <volume>17</volume>(<issue>1</issue>). <fpage>77</fpage>&#8211;<lpage>100</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/j.1083-6101.2011.01558.x</pub-id></mixed-citation></ref>
<ref id="B91"><mixed-citation publication-type="journal"><string-name><surname>Mondada</surname>, <given-names>Lorenza</given-names></string-name>. <year>2016</year>. <article-title>Challenges of multimodality: Language and the body in social interaction</article-title>. <source>Journal of Sociolinguistics</source> <volume>20</volume>(<issue>3</issue>). <fpage>336</fpage>&#8211;<lpage>366</lpage>. DOI: <pub-id pub-id-type="doi">10.1111/josl.1_12177</pub-id></mixed-citation></ref>
<ref id="B92"><mixed-citation publication-type="webpage"><string-name><surname>Mori</surname>, <given-names>Taiga</given-names></string-name> &amp; <string-name><surname>Jokinen</surname>, <given-names>Kristiina</given-names></string-name> &amp; <string-name><surname>Den</surname>, <given-names>Yasuharu</given-names></string-name>. <year>2022</year>. <chapter-title>Cognitive states and types of nods</chapter-title>. In <string-name><surname>Paggio</surname>, <given-names>Patrizia</given-names></string-name> &amp; <string-name><surname>Gatt</surname>, <given-names>Albert</given-names></string-name> &amp; <string-name><surname>Tanti</surname>, <given-names>Marc</given-names></string-name> (eds.), <source>Proceedings of the 2nd workshop on people in vision, language, and the mind</source>, <fpage>17</fpage>&#8211;<lpage>25</lpage>. <publisher-loc>Marseille, France</publisher-loc>: <publisher-name>European Language Resources Association</publisher-name>. <uri>https://aclanthology.org/2022.pvlam-1.4/</uri>.</mixed-citation></ref>
<ref id="B93"><mixed-citation publication-type="book"><string-name><surname>Navarretta</surname>, <given-names>Costanza</given-names></string-name> &amp; <string-name><surname>Paggio</surname>, <given-names>Patrizia</given-names></string-name>. <year>2010</year>. <chapter-title>Classification of feedback expressions in multimodal data</chapter-title>. In <source>Annual meeting of the association for computational linguistics, 48th ACL</source>, <fpage>318</fpage>&#8211;<lpage>324</lpage>. <publisher-loc>Uppsala, Sweden</publisher-loc>: <publisher-name>Association for Computational Linguistics</publisher-name>.</mixed-citation></ref>
<ref id="B94"><mixed-citation publication-type="book"><string-name><surname>Navarretta</surname>, <given-names>Costanza</given-names></string-name> &amp; <string-name><surname>Paggio</surname>, <given-names>Patrizia</given-names></string-name>. <year>2012</year>. <chapter-title>Multimodal behaviour and feedback in different types of interaction</chapter-title>. In <source>Proceedings of the eighth international conference on language resources and evaluation (LREC)</source>, <fpage>2338</fpage>&#8211;<lpage>2342</lpage>. <publisher-loc>Istanbul, Turkey</publisher-loc>: <publisher-name>European Language Resources Association (ELRA)</publisher-name>.</mixed-citation></ref>
<ref id="B95"><mixed-citation publication-type="thesis"><string-name><surname>Omardeen</surname>, <given-names>Rehana</given-names></string-name>. <year>2023</year>. <source>Providence Island Sign Language in interaction</source>: <publisher-name>Georg-August-University G&#246;ttingen</publisher-name> PhD Thesis. DOI: <pub-id pub-id-type="doi">10.53846/goediss-10243</pub-id></mixed-citation></ref>
<ref id="B96"><mixed-citation publication-type="journal"><string-name><surname>Oomen</surname>, <given-names>Marloes</given-names></string-name> &amp; <string-name><surname>Roelofsen</surname>, <given-names>Floris</given-names></string-name>. <year>2023</year>. <article-title>Biased polar question forms in Sign Language of the Netherlands (NGT)</article-title>. <source>FEAST. Formal and Experimental Advances in Sign language Theory</source> <volume>5</volume>, <fpage>156</fpage>&#8211;<lpage>168</lpage>. DOI: <pub-id pub-id-type="doi">10.31009/FEAST.i5.13</pub-id>.</mixed-citation></ref>
<ref id="B97"><mixed-citation publication-type="journal"><string-name><surname>&#214;zy&#252;rek</surname>, <given-names>Asl&#305;.</given-names></string-name> <year>2021</year>. <article-title>Considering the nature of multimodal language from a crosslinguistic perspective</article-title>. <source>Journal of Cognition</source> <volume>4</volume>(<issue>1</issue>). <elocation-id>42</elocation-id>. DOI: <pub-id pub-id-type="doi">10.5334/joc.165</pub-id></mixed-citation></ref>
<ref id="B98"><mixed-citation publication-type="journal"><string-name><surname>Perniss</surname>, <given-names>Pamela</given-names></string-name>. <year>2018</year>. <article-title>Why we should study multimodal language</article-title>. <source>Frontiers in Psychology</source> <volume>9</volume>. <elocation-id>1109</elocation-id>. DOI: <pub-id pub-id-type="doi">10.3389/fpsyg.2018.01109</pub-id></mixed-citation></ref>
<ref id="B99"><mixed-citation publication-type="journal"><string-name><surname>Puupponen</surname>, <given-names>Anna</given-names></string-name>. <year>2019</year>. <article-title>Towards understanding nonmanuality: A semiotic treatment of signers&#8217; head movements</article-title>. <source>Glossa</source> <volume>4</volume>(<issue>1</issue>). <elocation-id>39</elocation-id>. DOI: <pub-id pub-id-type="doi">10.5334/gjgl.709</pub-id></mixed-citation></ref>
<ref id="B100"><mixed-citation publication-type="webpage"><collab>R Core Team</collab>. <year>2025</year>. <chapter-title>R: A language and environment for statistical computing</chapter-title>. <uri>https://www.R-project.org/</uri>.</mixed-citation></ref>
<ref id="B101"><mixed-citation publication-type="journal"><string-name><surname>Rasenberg</surname>, <given-names>Marlou</given-names></string-name> &amp; <string-name><surname>Pouw</surname>, <given-names>Wim</given-names></string-name> &amp; <string-name><surname>&#214;zy&#252;rek</surname>, <given-names>Asl&#305;</given-names></string-name> &amp; <string-name><surname>Dingemanse</surname>, <given-names>Mark</given-names></string-name>. <year>2022</year>. <article-title>The multimodal nature of communicative efficiency in social interaction</article-title>. <source>Scientific Reports</source> <volume>12</volume>(<issue>1</issue>). <elocation-id>19111</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1038/s41598-022-22883-w</pub-id></mixed-citation></ref>
<ref id="B102"><mixed-citation publication-type="journal"><string-name><surname>Rasenberg</surname>, <given-names>Marlou</given-names></string-name> &amp; <string-name><surname>&#214;zy&#252;rek</surname>, <given-names>Asl&#305;</given-names></string-name> &amp; <string-name><surname>Dingemanse</surname>, <given-names>Mark</given-names></string-name>. <year>2020</year>. <article-title>Alignment in multimodal interaction: An integrative framework</article-title>. <source>Cognitive Science</source> <volume>44</volume>. <elocation-id>e12911</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1111/cogs.12911</pub-id></mixed-citation></ref>
<ref id="B103"><mixed-citation publication-type="book"><string-name><surname>Rossano</surname>, <given-names>Federico</given-names></string-name> &amp; <string-name><surname>Brown</surname>, <given-names>Penelope</given-names></string-name> &amp; <string-name><surname>Levinson</surname>, <given-names>Stephen</given-names></string-name>. <year>2009</year>. <chapter-title>Gaze, questioning, and culture</chapter-title>. In <string-name><surname>Sidnell</surname>, <given-names>Jack</given-names></string-name> (ed.), <source>Conversation Analysis: Comparative perspectives</source> (Studies in Interactional Sociolinguistics), <fpage>187</fpage>&#8211;<lpage>249</lpage>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1017/CBO9780511635670.008</pub-id></mixed-citation></ref>
<ref id="B104"><mixed-citation publication-type="journal"><string-name><surname>Sacks</surname>, <given-names>Harvey</given-names></string-name> &amp; <string-name><surname>Schegloff</surname>, <given-names>Emanuel A.</given-names></string-name> &amp; <string-name><surname>Jefferson</surname>, <given-names>Gail</given-names></string-name>. <year>1974</year>. <article-title>A simplest systematics for the organization of turn-taking for conversation</article-title>. <source>Language</source> <volume>50</volume>(<issue>4</issue>). <fpage>696</fpage>&#8211;<lpage>735</lpage>. DOI: <pub-id pub-id-type="doi">10.2307/412243</pub-id></mixed-citation></ref>
<ref id="B105"><mixed-citation publication-type="journal"><string-name><surname>Safar</surname>, <given-names>Josefina</given-names></string-name> &amp; <string-name><surname>De Vos</surname>, <given-names>Connie</given-names></string-name>. <year>2022</year>. <article-title>Pragmatic competence without a language model: Other-initiated repair in Balinese homesign</article-title>. <source>Journal of Pragmatics</source> <volume>202</volume>. <fpage>105</fpage>&#8211;<lpage>125</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.pragma.2022.10.017</pub-id></mixed-citation></ref>
<ref id="B106"><mixed-citation publication-type="journal"><string-name><surname>Sandler</surname>, <given-names>Wendy</given-names></string-name>. <year>2024</year>. <article-title>Speech and sign: The whole human language</article-title>. <source>Theoretical Linguistics</source> <volume>50</volume>(<issue>1&#8211;2</issue>). <fpage>107</fpage>&#8211;<lpage>124</lpage>. DOI: <pub-id pub-id-type="doi">10.1515/tl-2024-2008</pub-id></mixed-citation></ref>
<ref id="B107"><mixed-citation publication-type="journal"><string-name><surname>Sbranna</surname>, <given-names>Simona</given-names></string-name> &amp; <string-name><surname>M&#246;king</surname>, <given-names>Eduardo</given-names></string-name> &amp; <string-name><surname>Wehrle</surname>, <given-names>Simon</given-names></string-name> &amp; <string-name><surname>Grice</surname>, <given-names>Martine</given-names></string-name>. <year>2022</year>. <article-title>Backchannelling across languages: Rate, lexical choice and intonation in L1 Italian, L1 German and L2 German</article-title>. In <source>Proc. speech prosody</source>, <fpage>734</fpage>&#8211;<lpage>738</lpage>. DOI: <pub-id pub-id-type="doi">10.21437/SpeechProsody.2022-149</pub-id></mixed-citation></ref>
<ref id="B108"><mixed-citation publication-type="journal"><string-name><surname>Schegloff</surname>, <given-names>Emanuel A.</given-names></string-name> <year>1968</year>. <article-title>Sequencing in conversational openings</article-title>. <source>American Anthropologist</source> <volume>70</volume>(<issue>6</issue>). <fpage>1075</fpage>&#8211;<lpage>1095</lpage>. DOI: <pub-id pub-id-type="doi">10.1525/aa.1968.70.6.02a00030</pub-id></mixed-citation></ref>
<ref id="B109"><mixed-citation publication-type="book"><string-name><surname>Schegloff</surname>, <given-names>Emanuel A.</given-names></string-name> <year>1982</year>. <chapter-title>Discourse as an interactional achievement: Some uses of &#8216;uh huh&#8217; and other things that come between sentences</chapter-title>. In <string-name><surname>Tannen</surname>, <given-names>Deborah</given-names></string-name> (ed.), <source>Analyzing discourse: Text and talk</source>, <fpage>71</fpage>&#8211;<lpage>93</lpage>. <publisher-loc>Washington, D.C.</publisher-loc>: <publisher-name>Georgetown University Press</publisher-name>.</mixed-citation></ref>
<ref id="B110"><mixed-citation publication-type="webpage"><string-name><surname>Schloerke</surname>, <given-names>Barret</given-names></string-name> &amp; <string-name><surname>Cook</surname>, <given-names>Di</given-names></string-name> &amp; <string-name><surname>Larmarange</surname>, <given-names>Joseph</given-names></string-name> &amp; <string-name><surname>Briatte</surname>, <given-names>Francois</given-names></string-name> &amp; <string-name><surname>Marbach</surname>, <given-names>Moritz</given-names></string-name> &amp; <string-name><surname>Thoen</surname>, <given-names>Edwin</given-names></string-name> &amp; <string-name><surname>Elberg</surname>, <given-names>Amos</given-names></string-name> &amp; <string-name><surname>Toomet</surname>, <given-names>Ott</given-names></string-name> &amp; <string-name><surname>Crowley</surname>, <given-names>Jason</given-names></string-name> &amp; <string-name><surname>Hofmann</surname>, <given-names>Heike</given-names></string-name> &amp; <string-name><surname>Wickham</surname>, <given-names>Hadley</given-names></string-name>. <year>2024</year>. <chapter-title>GGally: Extension to &#8216;ggplot2&#8217;</chapter-title>. <uri>https://cran.r-project.org/web/packages/GGally/index.html</uri>.</mixed-citation></ref>
<ref id="B111"><mixed-citation publication-type="book"><string-name><surname>Schulder</surname>, <given-names>Marc</given-names></string-name> &amp; <string-name><surname>Hanke</surname>, <given-names>Thomas</given-names></string-name>. <year>2022</year>. <chapter-title>How to be FAIR when you CARE: The DGS Corpus as a case study of open science resources for minority languages</chapter-title>. In <string-name><surname>Calzolari</surname>, <given-names>Nicoletta</given-names></string-name> &amp; <string-name><surname>B&#233;chet</surname>, <given-names>Fr&#233;d&#233;ric</given-names></string-name> &amp; <string-name><surname>Blache</surname>, <given-names>Philippe</given-names></string-name> &amp; <string-name><surname>Choukri</surname>, <given-names>Khalid</given-names></string-name> &amp; <string-name><surname>Cieri</surname>, <given-names>Christopher</given-names></string-name> &amp; <string-name><surname>Declerck</surname>, <given-names>Thierry</given-names></string-name> &amp; <string-name><surname>Goggi</surname>, <given-names>Sara</given-names></string-name> &amp; <string-name><surname>Isahara</surname>, <given-names>Hitoshi</given-names></string-name> &amp; <string-name><surname>Maegaard</surname>, <given-names>Bente</given-names></string-name> &amp; <string-name><surname>Mariani</surname>, <given-names>Joseph</given-names></string-name> &amp; <string-name><surname>Mazo</surname>, <given-names>H&#233;l&#232;ne</given-names></string-name> &amp; <string-name><surname>Odijk</surname>, <given-names>Jan</given-names></string-name> &amp; <string-name><surname>Piperidis</surname>, <given-names>Stelios</given-names></string-name> (eds.), <source>Proceedings of the thirteenth language resources and evaluation conference</source>, <fpage>164</fpage>&#8211;<lpage>173</lpage>. <publisher-loc>Marseille, France</publisher-loc>: <publisher-name>European Language Resources Association (ELRA)</publisher-name>.</mixed-citation></ref>
<ref id="B112"><mixed-citation publication-type="journal"><string-name><surname>Selting</surname>, <given-names>Margret</given-names></string-name> &amp; <string-name><surname>Auer</surname>, <given-names>Peter</given-names></string-name> &amp; <string-name><surname>Barth-Weingarten</surname>, <given-names>Dagmar</given-names></string-name> &amp; <string-name><surname>Bergmann</surname>, <given-names>J&#246;rg</given-names></string-name> &amp; <string-name><surname>Bergmann</surname>, <given-names>Pia</given-names></string-name> &amp; <string-name><surname>Birkner</surname>, <given-names>Karin</given-names></string-name> &amp; <string-name><surname>Couper-Kuhlen</surname>, <given-names>Elizabeth</given-names></string-name> &amp; <string-name><surname>Deppermann</surname>, <given-names>Arnulf</given-names></string-name> &amp; <string-name><surname>Gilles</surname>, <given-names>Peter</given-names></string-name> &amp; <string-name><surname>G&#252;nthner</surname>, <given-names>Susanne</given-names></string-name> &amp; <string-name><surname>Hartung</surname>, <given-names>Martin</given-names></string-name> &amp; <string-name><surname>Kern</surname>, <given-names>Friederike</given-names></string-name> &amp; <string-name><surname>Mertzlufft</surname>, <given-names>Christine</given-names></string-name> &amp; <string-name><surname>Meyer</surname>, <given-names>Christian</given-names></string-name> &amp; <string-name><surname>Morek</surname>, <given-names>Miriam</given-names></string-name> &amp; <string-name><surname>Oberzaucher</surname>, <given-names>Frank</given-names></string-name> &amp; <string-name><surname>Peters</surname>, <given-names>J&#246;rg</given-names></string-name> &amp; <string-name><surname>Quasthoff</surname>, <given-names>Uta</given-names></string-name> &amp; <string-name><surname>Sch&#252;tte</surname>, <given-names>Wilfried</given-names></string-name> &amp; <string-name><surname>Stukenbrock</surname>, <given-names>Anja</given-names></string-name> &amp; <string-name><surname>Uhmann</surname>, <given-names>Susanne</given-names></string-name>. <year>2009</year>. <article-title>Gespr&#228;chsanalytisches Transkriptionssystem 2 (GAT 2)</article-title>. <source>Gespr&#228;chsforschung &#8211; Online-Zeitschrift zur verbalen Interaktion</source> <volume>10</volume>. <fpage>353</fpage>&#8211;<lpage>402</lpage>.</mixed-citation></ref>
<ref id="B113"><mixed-citation publication-type="journal"><string-name><surname>Simon</surname>, <given-names>Carsta</given-names></string-name>. <year>2018</year>. <article-title>The functions of active listening responses</article-title>. <source>Behavioural Processes</source> <volume>157</volume>. <fpage>47</fpage>&#8211;<lpage>53</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.beproc.2018.08.013</pub-id></mixed-citation></ref>
<ref id="B114"><mixed-citation publication-type="journal"><string-name><surname>Skedsmo</surname>, <given-names>Kristian</given-names></string-name>. <year>2020</year>. <article-title>Other-initiations of repair in Norwegian Sign Language</article-title>. <source>Social Interaction: Video-Based Studies of Human Sociality</source> <volume>3</volume>(<issue>2</issue>). DOI: <pub-id pub-id-type="doi">10.7146/si.v3i2.117723</pub-id></mixed-citation></ref>
<ref id="B115"><mixed-citation publication-type="journal"><string-name><surname>Skedsmo</surname>, <given-names>Kristian</given-names></string-name>. <year>2023</year>. <article-title>Repair receipts in Norwegian Sign Language multiperson conversation</article-title>. <source>Journal of Pragmatics</source> <volume>215</volume>. <fpage>189</fpage>&#8211;<lpage>212</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.pragma.2023.07.015</pub-id></mixed-citation></ref>
<ref id="B116"><mixed-citation publication-type="journal"><string-name><surname>Stivers</surname>, <given-names>Tanya</given-names></string-name>. <year>2008</year>. <article-title>Stance, alignment, and affiliation during storytelling: When nodding is a token of affiliation</article-title>. <source>Research on Language &amp; Social Interaction</source> <volume>41</volume>(<issue>1</issue>). <fpage>31</fpage>&#8211;<lpage>57</lpage>. DOI: <pub-id pub-id-type="doi">10.1080/08351810701691123</pub-id></mixed-citation></ref>
<ref id="B117"><mixed-citation publication-type="journal"><string-name><surname>Stubbe</surname>, <given-names>Maria</given-names></string-name>. <year>1998</year>. <article-title>Are you listening? Cultural influences on the use of supportive verbal feedback in conversation</article-title>. <source>Journal of Pragmatics</source> <volume>29</volume>(<issue>3</issue>). <fpage>257</fpage>&#8211;<lpage>289</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/S0378-2166(97)00042-8</pub-id></mixed-citation></ref>
<ref id="B118"><mixed-citation publication-type="journal"><string-name><surname>Tolins</surname>, <given-names>Jackson</given-names></string-name> &amp; <string-name><surname>Fox Tree</surname>, <given-names>Jean E.</given-names></string-name> <year>2014</year>. <article-title>Addressee backchannels steer narrative development</article-title>. <source>Journal of Pragmatics</source> <volume>70</volume>. <fpage>152</fpage>&#8211;<lpage>164</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.pragma.2014.06.006</pub-id></mixed-citation></ref>
<ref id="B119"><mixed-citation publication-type="book"><string-name><surname>Tottie</surname>, <given-names>Gunnel</given-names></string-name>. <year>1991</year>. <chapter-title>Conversation style in British and American English: The case of backchannels</chapter-title>. In <string-name><surname>Aijmer</surname>, <given-names>Karin</given-names></string-name> &amp; <string-name><surname>Altenberg</surname>, <given-names>Bengt</given-names></string-name> (eds.), <source>English corpus linguistics: Studies in honour of Jan Svartvik</source>, <fpage>254</fpage>&#8211;<lpage>271</lpage>. <publisher-loc>London</publisher-loc>: <publisher-name>Longman</publisher-name>.</mixed-citation></ref>
<ref id="B120"><mixed-citation publication-type="journal"><string-name><surname>Trujillo</surname>, <given-names>James P.</given-names></string-name> &amp; <string-name><surname>Simanova</surname>, <given-names>Irina</given-names></string-name> &amp; <string-name><surname>Bekkering</surname>, <given-names>Harold</given-names></string-name> &amp; <string-name><surname>&#214;zy&#252;rek</surname>, <given-names>Asli</given-names></string-name>. <year>2018</year>. <article-title>Communicative intent modulates production and comprehension of actions and gestures: A Kinect study</article-title>. <source>Cognition</source> <volume>180</volume>. <fpage>38</fpage>&#8211;<lpage>51</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.cognition.2018.04.003</pub-id></mixed-citation></ref>
<ref id="B121"><mixed-citation publication-type="journal"><string-name><surname>Trujillo</surname>, <given-names>James P.</given-names></string-name> &amp; <string-name><surname>Vaitonyte</surname>, <given-names>Julija</given-names></string-name> &amp; <string-name><surname>Simanova</surname>, <given-names>Irina</given-names></string-name> &amp; <string-name><surname>&#214;zy&#252;rek</surname>, <given-names>Asl&#305;.</given-names></string-name> <year>2019</year>. <article-title>Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research</article-title>. <source>Behavior Research Methods</source> <volume>51</volume>(<issue>2</issue>). <fpage>769</fpage>&#8211;<lpage>777</lpage>. DOI: <pub-id pub-id-type="doi">10.3758/s13428-018-1086-8</pub-id></mixed-citation></ref>
<ref id="B122"><mixed-citation publication-type="journal"><string-name><surname>Truong</surname>, <given-names>Khiet P.</given-names></string-name> &amp; <string-name><surname>Poppe</surname>, <given-names>Ronald</given-names></string-name> &amp; <string-name><surname>Kok</surname>, <given-names>Iwan De</given-names></string-name> &amp; <string-name><surname>Heylen</surname>, <given-names>Dirk</given-names></string-name>. <year>2011</year>. <article-title>A multimodal analysis of vocal and visual backchannels in spontaneous dialogs</article-title>. In <source>Proc. Interspeech</source> <volume>2011</volume>, <fpage>2973</fpage>&#8211;<lpage>2976</lpage>. DOI: <pub-id pub-id-type="doi">10.21437/Interspeech.2011-744</pub-id></mixed-citation></ref>
<ref id="B123"><mixed-citation publication-type="book"><string-name><surname>Uhmann</surname>, <given-names>Susanne</given-names></string-name>. <year>1996</year>. <chapter-title>On rhythm in everyday German conversation: Beat clashes in assessment utterances</chapter-title>. In <string-name><surname>Couper-Kuhlen</surname>, <given-names>Elizabeth</given-names></string-name> &amp; <string-name><surname>Selting</surname>, <given-names>Margret</given-names></string-name> (eds.), <source>Prosody in conversation</source>, <fpage>303</fpage>&#8211;<lpage>365</lpage>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1017/CBO9780511597862.010</pub-id></mixed-citation></ref>
<ref id="B124"><mixed-citation publication-type="webpage"><string-name><surname>van Gijn</surname>, <given-names>Rik</given-names></string-name> &amp; <string-name><surname>Hirtzel</surname>, <given-names>Vincent</given-names></string-name> &amp; <string-name><surname>Gipper</surname>, <given-names>Sonja</given-names></string-name> &amp; <string-name><surname>Ballivi&#225;n Torrico</surname>, <given-names>Jerem&#237;as</given-names></string-name>. <year>2011</year>. <chapter-title>The Yurakar&#233; Archive</chapter-title>. <uri>https://hdl.handle.net/1839/8df587ed-3d6e-4db8-bfe5-4ecad5cef3a2</uri>.</mixed-citation></ref>
<ref id="B125"><mixed-citation publication-type="book"><string-name><surname>Vandenitte</surname>, <given-names>S&#233;bastien</given-names></string-name>. <year>2023</year>. <chapter-title>When referents are seen and heard: A comparative study of constructed action in the discourse of LSFB (French Belgian Sign Language) signers and Belgian French speakers</chapter-title>. In <string-name><surname>Gardelle</surname>, <given-names>Laure</given-names></string-name> &amp; <string-name><surname>Vincent-Durroux</surname>, <given-names>Laurence</given-names></string-name> &amp; <string-name><surname>Vinckel-Roisin</surname>, <given-names>H&#233;l&#232;ne</given-names></string-name> (eds.), <source>Reference: From conventions to pragmatics</source>, <fpage>127</fpage>&#8211;<lpage>149</lpage>. <publisher-loc>Amsterdam/Philadelphia</publisher-loc>: <publisher-name>John Benjamins</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1075/slcs.228.07van</pub-id></mixed-citation></ref>
<ref id="B126"><mixed-citation publication-type="journal"><string-name><surname>Vigliocco</surname>, <given-names>Gabriella</given-names></string-name> &amp; <string-name><surname>Perniss</surname>, <given-names>Pamela</given-names></string-name> &amp; <string-name><surname>Vinson</surname>, <given-names>David</given-names></string-name>. <year>2014</year>. <article-title>Language as a multimodal phenomenon: Implications for language learning, processing and evolution</article-title>. <source>Philosophical Transactions of the Royal Society B: Biological Sciences</source> <volume>369</volume>(<issue>1651</issue>). <elocation-id>20130292</elocation-id>. DOI: <pub-id pub-id-type="doi">10.1098/rstb.2013.0292</pub-id></mixed-citation></ref>
<ref id="B127"><mixed-citation publication-type="webpage"><string-name><surname>White</surname>, <given-names>Sheida</given-names></string-name>. <year>1989</year>. <article-title>Backchannels across cultures: A study of Americans and Japanese</article-title>. <source>Language in Society</source> <volume>18</volume>(<issue>1</issue>). <fpage>59</fpage>&#8211;<lpage>76</lpage>. <uri>http://www.jstor.org/stable/4168001</uri>. DOI: <pub-id pub-id-type="doi">10.1017/S0047404500013270</pub-id></mixed-citation></ref>
<ref id="B128"><mixed-citation publication-type="book"><string-name><surname>Wickham</surname>, <given-names>Hadley</given-names></string-name>. <year>2016</year>. <source>ggplot2: Elegant graphics for data analysis</source>. <edition>2nd</edition> edition. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1007/978-3-319-24277-4</pub-id></mixed-citation></ref>
<ref id="B129"><mixed-citation publication-type="journal"><string-name><surname>Wiener</surname>, <given-names>Morton</given-names></string-name> &amp; <string-name><surname>Devoe</surname>, <given-names>Shannon</given-names></string-name>. <year>1974</year>. <article-title>Regulators, channels, and communication disruption</article-title>. Clark University unpublished research proposal.</mixed-citation></ref>
<ref id="B130"><mixed-citation publication-type="book"><string-name><surname>Wiltschko</surname>, <given-names>Martina</given-names></string-name>. <year>2021</year>. <source>The grammar of interactional language</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1017/9781108693707</pub-id></mixed-citation></ref>
<ref id="B131"><mixed-citation publication-type="book"><string-name><surname>Xu</surname>, <given-names>Jun</given-names></string-name>. <year>2016</year>. <source>Displaying recipiency: Reactive tokens in Mandarin task-oriented interaction</source>. <publisher-loc>Amsterdam/Philadelphia</publisher-loc>: <publisher-name>John Benjamins</publisher-name>. DOI: <pub-id pub-id-type="doi">10.1075/scld.6</pub-id></mixed-citation></ref>
<ref id="B132"><mixed-citation publication-type="book"><string-name><surname>Yngve</surname>, <given-names>Victor H.</given-names></string-name> <year>1970</year>. <chapter-title>On getting a word in edgewise</chapter-title>. In <source>Chicago Linguistics Society, 6th Meeting (CLS-70)</source>, <fpage>567</fpage>&#8211;<lpage>577</lpage>. <publisher-loc>Chicago, Illinois, USA</publisher-loc>: <publisher-name>University of Chicago</publisher-name>.</mixed-citation></ref>
<ref id="B133"><mixed-citation publication-type="journal"><string-name><surname>Zellers</surname>, <given-names>Margaret</given-names></string-name>. <year>2021</year>. <article-title>An overview of forms, functions, and configurations of backchannels in Ruruuli/Lunyala</article-title>. <source>Journal of Pragmatics</source> <volume>175</volume>. <fpage>38</fpage>&#8211;<lpage>52</lpage>. DOI: <pub-id pub-id-type="doi">10.1016/j.pragma.2021.01.012</pub-id></mixed-citation></ref>
</ref-list>
</back>
</article>