1 Introduction

Research on spoken language processing in hearing unimodal bilinguals has left little doubt that linguistic units, e.g., words or phonemes, from both languages are simultaneously activated during the processing of one language. This holds not only for co-activation of words based on semantic overlap, but also for co-activation of words based on sub-lexical (phonological or orthographic) overlap (see Van Assche et al. 2012 for a recent review). The latter phenomenon, known as language non-selective lexical access, has been demonstrated in adult unimodal bilinguals (Spivey & Marian 1999; Marian et al. 2003; Schulpen et al. 2003; Ju & Luce 2004; Lagrou et al. 2011; Von Holzen & Mani 2014) and child unimodal bilinguals (Brenders et al. 2011; Von Holzen & Mani 2012; Poarch & Van Hell 2012) of varying proficiency in both their languages. It supports models of bilingual word recognition that assume considerable cross-talk between the two languages of a bilingual at multiple stages of processing (e.g., the BIA+ model: Dijkstra et al. 1999; Dijkstra & van Heuven 2002). However, most research on language non-selective lexical access has focused on hearing unimodal bilinguals, i.e., in this context, bilinguals growing up with two spoken languages. Hearing or deaf people growing up with a sign language and a spoken language – i.e., bimodal bilinguals – constitute a special group of bilinguals whose two languages are produced and perceived in two different modalities with no sensorimotor overlap: the visual-manual modality and the auditory-oral modality. Hence, with the term bimodal bilinguals we refer to both deaf and hearing signers. By using the term deaf bimodal bilinguals, we want to highlight the fact that many deaf individuals are able to master a visual-manual sign language as well as an oral spoken language, although the spoken language is available to them only visually, through written text and lip-reading (see Woll & MacSweeney 2016 for discussion).

In comparison to research with hearing unimodal bilinguals, research with deaf or hearing bimodal bilinguals enables the investigation of the modality-independent consequences of bilingualism on the structure, organization and processing of the language system (Emmorey et al. 2016). Against this background, the current study with congenitally deaf native signers examined the co-activation of L2 German words during the processing of sentences in L1 German Sign Language (Deutsche Gebärdensprache, DGS). Evidence that the less dominant, perceptually non-overlapping L2 words are co-activated during L1 sign processing would have important implications for models of bilingual word recognition with regard to the nature of cross-talk between two languages during processing, as we will explain in more detail below.

Psycho- and neurolinguistic research with bimodal bilinguals has long focused on hearing bimodal bilinguals (or children of deaf adults, Codas) (e.g., Emmorey et al. 2008; 2013; 2016; Shook & Marian 2009; Lillo-Martin et al. 2014). Given that this population has access to both language modalities from birth, spoken language acquisition in this population is more comparable – but not identical – to that of unimodal, non-signing bilinguals. In contrast, deaf bimodal bilinguals, being congenitally and profoundly deaf, form a special group who must overcome the challenge of learning a spoken language (L2) with very little or no phonological input. Nevertheless, deaf bilinguals not only acquire fluency in their native sign language (L1), but many are also able to achieve high levels of proficiency in their second (spoken) language (L2), using visual cues such as mouth representations of spoken words and orthographic representations of written words (cf. Marentette & Mayberry 2000; Baker & Woll 2008; Plaza-Pust & Weinmeister 2008; Lillo-Martin 2009). However, the lack of perceptual overlap between the modalities of their two languages and the reduction of spoken language input to visual cues place additional cognitive demands on the acquisition of the second language. This raises doubts about the strength of the connections between the two languages and, consequently, about the possibility of both languages being simultaneously active during the processing of input in one of these languages.

1.1 Previous studies

Despite the absence of phonological overlap between lexical representations of signs and words, a number of studies suggest that the processing of a spoken or written word leads to the co-activation of the corresponding sign representation, i.e., cross-language co-activation from word to sign. With hearing bimodal bilinguals, Shook & Marian (2012) conducted a visual-world-paradigm experiment showing that participants co-activated American Sign Language (ASL) signs when processing spoken English words: Participants looked longer at competitor items than at distractor items when the ASL sign translation of the competitor overlapped phonologically with the ASL sign translation of the target word. The authors argue that sign translations of the target word were co-activated by top-down or lateral connections between word and sign representations. Similar results were provided by Giezen & Emmorey (2016): In a sign production task with interfering words, hearing bimodal bilinguals showed facilitated ASL sign production when the sign translations of the interfering English words were phonologically related to the target signs (for results of hearing bimodal bilinguals in Spanish Sign Language (Lengua de Señas Española, LSE), see Villameriel et al. 2016).

By testing deaf bimodal bilinguals, Morford et al. (2011) similarly provide evidence for the co-activation of ASL signs (L1) during the processing of written English words (L2). In their experiment, deaf participants and a control group of hearing bilinguals without any knowledge of ASL were presented with two written words that either had phonologically similar sign translations or not, and were asked to judge the semantic relatedness of the two words. The processing of the (overt) semantic overlap between written words was modulated by the (covert) phonological ASL overlap between the sign translations of the written words. Reaction time measurements showed that deaf participants were faster to indicate the semantic relatedness of words that also had phonologically similar sign translations compared to semantically related words with unrelated sign translations. These results were the first to suggest that deaf bimodal bilinguals also activate L1 sign representations during L2 written word processing (Morford et al. 2011; 2014; for similar results in other sign languages, see Kubus et al. 2014 for DGS and Ormel et al. 2012 for Sign Language of the Netherlands, NGT). Meade et al. (2017) present the first ERP evidence with deaf bimodal bilinguals for implicit co-activation of ASL signs during single English word processing: Prime-target word pairs with phonologically related sign translations elicited reduced negativities compared to targets with unrelated sign translations.

While the results reviewed above shed important light on the co-activation of the dominant first language during second language processing, there is, as yet, little known about the co-activation of less dominant second language spoken/written word representations during the processing of L1 signs, i.e., cross-language co-activation from sign to word. This question is theoretically relevant as models of the bilingual lexicon differ in how they conceptualize inter-language connections. For instance, the Revised Hierarchical Model (RHM, Kroll & Stewart 1994) proposes that there are strong connections from L2 to L1 on the lexical level because second language processing is mediated by the first language. Thus, individuals at a relatively early stage of L2 learning process L2 items via corresponding L1 translations. Although the processing of L2 items via the corresponding L1 translation reduces with a more advanced level of L2 competence, lexical associations are assumed to be stronger for L2 to L1 items compared to L1 to L2 items. Indeed, studies with hearing unimodal bilinguals suggest that when tested in their L1, participants do not implicitly label objects in both their languages. In contrast, when tested in their L2, they do implicitly retrieve both language labels for visually presented objects (Spivey & Marian 1999; Marian & Spivey 2003a; b; Weber & Cutler 2004; Wu & Thierry 2011; but see Von Holzen & Mani 2014). Furthermore, studies examining the time course of translation priming suggest that translation priming from L1 to L2 follows a later time course relative to L2 to L1 (Alvarez et al. 2003).

The strength of connections from L1 to L2 may be further weakened in bimodal bilinguals, given that lexical items in their two languages do not share overt phonological form-based similarities and the two languages draw upon distinct sensorimotor systems. Thus, taken together, (a) the absence of perceptual overlap between the two languages and (b) the difference in the dominance of the two languages of congenitally deaf bilinguals call into question the possibility that processing signs routinely involves access to cross-language written word translations. However, were we to find evidence of cross-language co-activation from sign to word in deaf bimodal bilinguals, this would substantially advance our knowledge of bilingual activation models (like BIA+, Dijkstra & van Heuven 2002; or BLINCS, Shook & Marian 2013), as the lack of overt phonological overlap implies a connective link between the two languages on a separate, non-phonological level (see also Thierry & Wu 2007). Indeed, Lee et al. (2019) present ERP evidence for co-activation of English words (L2) during the recognition of single ASL signs (L1) in deaf and hearing bimodal bilinguals. In a semantic relation task, deaf and hearing bimodal bilinguals had to judge the semantic relatedness of single sign pairs, half of which were semantically related and half semantically unrelated. Half of the semantically unrelated pairs had an orthographic and phonological overlap in their English word translations (e.g., ‘bar’ and ‘star’). ERP results showed an N400 effect for the critical condition, i.e., pairs that are form-related in English. Interestingly, the ERP effect was reversed in deaf bimodal bilinguals compared to hearing bimodal bilinguals: For deaf signers, the critical condition elicited a larger N400 component compared to the non-critical condition (i.e., pairs with unrelated English translations). The authors attribute these results to a difference in language dominance and an asymmetry in the reliance on orthographic and phonological representations between deaf and hearing bimodal bilinguals.

Against this background, the current ERP study examines whether congenitally deaf bimodal bilinguals co-activate L2 written word representations during the processing of L1 signs. It is worth noting that the current study – in contrast to all previous studies – focuses on cross-language co-activation from sign to word in the processing of signed sentences, rather than individual signs. Sentence contexts typically provide additional information that may allow the discourse partner to predict the upcoming input (Altmann & Kamide 1999; Mani & Huettig 2012; Hosemann et al. 2013) and are thus more ecologically valid, as we usually perceive words in context and not in isolation. Furthermore, ERP studies of monolingual lexical recognition provide evidence of distinct neurophysiological responses to an item processed in a sentential context compared to an item processed as an isolated single word. For example, word frequency effects are attenuated when words are presented in meaningful sentence contexts (Van Petten 1995; Bornkessel-Schlesewsky et al. 2016). On this account, we explicitly chose to present signs in sentence contexts. Nevertheless, given that this is the first study examining L2 (written word) activation during L1 (sign) processing in sentences, we avoided highly predictive sentence contexts to allow a preliminary assessment of the strength of L2 to L1 connections across modalities.

1.2 The present study

Congenitally deaf bilinguals watched signed DGS sentences in which two signs served as prime and target, respectively. Prime and target signs were either phonologically related within or across language or unrelated. In the within-language priming condition, prime and target signs were minimal pairs, overlapping in three out of four phonological parameters and differing in one, i.e., either in handshape, movement, location or orientation of the sign. Figure 1 presents video stills of the DGS signs STORE and ANIMAL that differ only in the path movement. Since prime and target signs form minimal pairs, differing in only one phonological parameter, processing of the target sign may be facilitated by prior retrieval of and access to the overlapping phonological parameters pre-activated by the prime.

Figure 1

Video stills of the DGS signs STORE (left) and ANIMAL (right). The distinctive parameter is the movement: STORE has a reduplicated up-and-down movement, while ANIMAL has an alternating back-and-forth movement.

In the cross-language priming condition, primes and targets were phonologically unrelated signs, but their German translations were phonological and orthographic minimal pairs. For example, the signs MOTHER and BUTTER have no phonological overlap in DGS, but their German translations differ only in the onset grapheme and phoneme: <Mutter>, [ˈmʊtɐ] – <Butter>, [ˈbʊtɐ]. If processing L1 signs leads to co-activation of the corresponding L2 written word translations, we expect to find a priming effect for related cross-language prime-target pairs. Importantly, in the within-language condition, any effects of sign priming rely on (overt) sign-phonological overlap between prime and target and can be mediated by merely pre-lexical processes. In contrast, in the cross-language condition, any effects of priming necessarily rely on lexical-level activation of the prime’s and the target’s L2 written word translations during L1 sign processing.

Most studies investigating phonological priming as reflected in ERP responses have focused on spoken languages (e.g., Connolly & Phillips 1994; Praamstra et al. 1994; Dumay et al. 2001). However, Gutiérrez et al. (2012) compared LSE prime-target pairs that shared either the phonological parameter location or the phonological parameter handshape with unrelated prime-target pairs. With such minimally overlapping prime-target pairs, ERP responses were more negative for related signs than for unrelated signs, but only in the case of location overlap. The authors interpret this effect on the basis of interactive activation models (e.g., McClelland & Elman 1986) that propose lateral inhibition at the lexical level during the processing of such minimally related signs.

While we might expect a similar interference effect in the processing of related signs within and, potentially, also across languages, our own research (and numerous other studies) with increased overlap between primes and targets (at least in spoken language) finds increased negative potentials time-locked to unrelated targets relative to related targets (Von Holzen & Mani 2014; see also Thierry & Wu 2007 and Van Hell & Kroll 2013 for a review).

Given the increased phonological overlap between signs in the within-language priming condition and the increased orthographic/phonological overlap between written word translations in the cross-language priming condition, we therefore predict increased negative deflections in brain activity to target signs preceded by unrelated prime signs compared to within-language related primes or cross-language related primes. In the case of within-language prime-target pairs, this reduced negativity is expected due to greater ease in target processing given the increased (overt) phonological overlap between targets and primes, such that three of the four phonological parameters of the target may have been pre-activated during presentation of the prime. In the case of cross-language primes, this reduced negativity is expected due to greater ease in target processing resulting from pre-activation of the L2 translation of the target (via covert overlap with the L2 translation of the prime). Such a reduction in the N400 to targets in cross-language related pairs would provide particularly strong support for co-activation of the L2 German translation equivalents of L1 signs during L1 sentence processing. We focus on two time windows that have previously been shown to be sensitive to the comparisons in the current study, namely, the time window from 300 to 400 ms and the time window from 450 to 650 ms. The earlier time window captures the typical N400 component (Kutas & Hillyard 1980; Kutas & Federmeier 2000), which has been shown to index the processing of semantic information. We also analyze a slightly later time window given that a number of studies with unimodal bilinguals suggest that the onset of components like the N400 can be delayed in bilinguals, especially in conditions that involve access to their L2 (e.g., Ardal et al. 1990; Moreno & Kutas 2005; Van Hell & Tokowicz 2010). Indeed, some studies suggest that this later time window may be more sensitive to translation effects, especially when going from L1 to L2, suggesting that translation from L1 to L2 follows a later time course than translation from L2 to L1 (Phillips et al. 2006; Van Hell & Kroll 2013). In addition, previous studies on sign language processing have shown that the temporal nature of signs – especially the relatively long transition phase between signs – may be responsible for a delayed N400 (e.g., Lee et al. 2019; see also Hosemann et al. 2013 for the impact of the temporal nature of signs on language processing). We will come back to this in more detail in the discussion section.

2 Materials and methods

2.1 Participants

Fifteen congenitally deaf native signers of DGS from all over Germany participated in the experiment as paid volunteers (9 male, 6 female). All participants were right-handed, had normal or corrected-to-normal vision and reported no history of neurological disorders. Their ages ranged from 24 to 48 years (mean 31.8; sd 7.41). All signers had deaf parents and received DGS input before the age of three (Age of Acquisition (AoA) L1: 0–3; mean 0.83; sd 1.27). According to a questionnaire administered after the experiment, participants began learning written German at an average age of 4.5 years (AoA L2: 2–7; mean 4.53; sd 1.3). On a 1–10 scale, participants rated their proficiency in written German at 6.73 on average (range 4–10; sd 1.28). Most of them regularly write German in work contexts, in emails and in chats. No further reading tests or German proficiency tests were administered.

2.2 Materials

The materials were discussed, developed and video-recorded in collaboration with two deaf DGS informants (one female, one male). The stimulus material consisted of a total of 160 sentences, 40 sentences per condition (a complete list of the stimulus sentences can be found in Appendix A). In the within-language priming condition (i.e., priming in DGS), prime and target signs formed minimal pairs overlapping in three phonological parameters and differing in only one: either in handshape, in the path movement, in location or in the orientation of the sign (cf. STORE – ANIMAL in Figure 1). Sentences in the within-language control condition were identical to their within-language priming counterparts, except for the prime, which was phonologically unrelated to the target in DGS as well as in its German translation (e.g., CHURCH – ANIMAL). In the cross-language priming condition (i.e., priming due to German translation equivalents), prime and target signs shared no overt phonological overlap, but their German translation equivalents were orthographic and phonological minimal pairs differing only in the onset grapheme/phoneme (MOTHER – BUTTER, ‘Mutter’ – ‘Butter’). Note that we did not further differentiate between orthographic and phonological overlap in the design of our stimulus set. Sentences in the cross-language control condition were identical to their priming counterparts, except for the prime sign, which was phonologically unrelated to the target sign in DGS; moreover, the German translations of prime and target were not minimal pairs and did not share a rime (FATHER – BUTTER, ‘Vater’ – ‘Butter’). All sentences were semantically and grammatically correct sentences of DGS. Prime and target signs were semantically unrelated across all conditions. By changing only the prime signs, we could keep target signs identical across the related and the unrelated conditions. Any differences in target processing across primed and control conditions can thus be attributed to the differences in the degree and kind of overlap between primes and targets.

Test sentences, as in (1), started with a typical DGS sentence beginning: either a topic construction (TOPIC SOCCER …), a temporal construction (THIS YEAR …), or a location (SUPERMARKET INDEX …). Sentence beginnings were followed by the prime, an intermediate index sign, the target, and a sentence-final verb. Note that prime and target signs were nouns and that DGS is a verb-final language (Perniss et al. 2007; Leeson & Saeed 2012). IX is a so-called index or pointing sign, which is used for referential anchoring of locations within the signing space (Steinbach & Onea 2016).

(1) a. within-language priming condition:
    DGS: USUALLY STORE IX ANIMAL ALLOWED-NEG VISIT
    German: Normalerweise sind in Geschäften keine Tiere erlaubt.
      ‘Usually, animals are not allowed to enter stores.’
  b. within-language control condition:
    DGS: USUALLY CHURCH IX ANIMAL ALLOWED-NEG VISIT
    German: Normalerweise sind in Kirchen keine Tiere erlaubt.
      ‘Usually, animals are not allowed to enter churches.’
  c. cross-language priming condition:
    DGS: REFRIGERATOR POSS1 MOTHER IX BUTTER TAKE-OUT
    German: Meine Mutter holt Butter aus dem Kühlschrank.
      ‘My mother takes out the butter from the refrigerator.’
  d. cross-language control condition:
    DGS: REFRIGERATOR POSS1 FATHER IX BUTTER TAKE-OUT
    German: Mein Vater holt Butter aus dem Kühlschrank.
      ‘My father takes out the butter from the refrigerator.’

Stimulus sentences were signed by a male deaf model and recorded with an HDR-XR 550E full-HD camera (25 frames/sec). The video editing software Adobe Premiere Pro was used for cutting and editing the material. To ensure the naturalness of the test stimuli, each sentence was recorded in one take, so prime signs were not cross-spliced across sentences (see Section 2.5). Videos had a width of 720 pixels and a height of 576 pixels (corresponding to a size of approximately 25 by 20 cm on screen). Each DGS sentence was preceded by 2 seconds in which the signer remained still before starting to sign. After the end of the sentence, the signer again remained motionless for about 1 second. On average, the stimulus videos had a length of 9.34 seconds (sd 1.04). Prime signs started on average 4.404 seconds into the video (sd 0.83) and had an average length of 0.532 seconds (sd 0.13). Target signs started on average 6.057 seconds into the video (sd 0.86) and had an average length of 0.505 seconds (sd 0.13). The inter-stimulus interval between prime offsets and target onsets was on average 1.122 seconds (sd 0.19), consisting of the intermediate INDEX sign and the preceding and subsequent transition phases between signs. Transition phases can be comparatively long and last up to 200 ms (cf. Hosemann et al. 2013; Jantunen 2013). There were no significant differences in the length of the intervals between primes and targets across conditions (within-language priming: 1.149 seconds [sd 0.22], within-language control: 1.074 seconds [sd 0.21], cross-language priming: 1.137 seconds [sd 0.18], cross-language control: 1.128 seconds [sd 0.15]; p = 0.28). None of the other measures, e.g., duration of prime and target signs and length of stimulus videos, differed between related and control conditions (ps > 0.16). In addition to the stimulus sentences, we recorded 8 practice sentences that were similar in structure but included no prime and target signs. Furthermore, after every 8th critical sentence we interposed a yes/no-question about the content of one of the preceding eight test sentences to ensure participants were attending to the presented stimuli. There was a total of 20 yes/no-questions (10 questions with a correct ‘yes’-answer, 10 with a correct ‘no’-answer).
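To illustrate how such an interval comparison could be computed, the following sketch runs a one-way ANOVA over per-trial prime-offset-to-target-onset intervals. The data here are synthetic placeholders drawn from the reported means and standard deviations; the actual analysis would use the measured intervals.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Synthetic placeholder intervals (in seconds): 40 trials per condition, drawn
# around the reported means/SDs; the real analysis would use measured values.
isi = {
    "within_priming": rng.normal(1.149, 0.22, 40),
    "within_control": rng.normal(1.074, 0.21, 40),
    "cross_priming":  rng.normal(1.137, 0.18, 40),
    "cross_control":  rng.normal(1.128, 0.15, 40),
}

stat, p = f_oneway(*isi.values())
print(f"F = {stat:.2f}, p = {p:.3f}")  # the paper reports p = 0.28 for the measured data
```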

We instructed our model to sign as naturally as possible. To ensure clarity, signing was marginally slower than natural conversation speed, but included nonmanual actions when appropriate, except for mouthing on prime and target signs. Nonmanual components (such as movements of the head and upper body, facial expressions, eye movement, and mouth actions) constitute an essential grammatical part of sign languages and can have several linguistic functions (Pfau & Quer 2010; Sandler 2012; Wilbur 2012; Herrmann & Steinbach 2013). The term mouthing refers to the silent pronunciation of (parts of) the corresponding spoken word concurrent with sign production (Boyes Braem & Sutton-Spence 2001). We explicitly instructed our model to omit mouthing on prime and target signs to ensure that no co-activation of the German word would arise merely from processing of the overt visual mouthing cues. This also prevented target recognition from being facilitated by similar mouthing cues during prime production. The absence of mouthing on prime and target signs increased the potential ambiguity of the signs, since there are dialectal variations and homophonic signs in DGS. Signs in DGS do not necessarily have a one-to-one translation in German, but can be rather ambiguous. For example, the DGS sign CUP is homophonous with the sign COFFEE and can further mean ‘drink coffee out of a cup’. In order to ensure that signers retrieved the intended concept when presented with a particular sign, we administered a translation task in which participants saw the 40 critical videos of the cross-language priming condition and had to translate the content of each video into written German. Sentences with an unintended translation of the prime or target sign were excluded individually from the single-subject analysis.

2.3 Procedure

Participants were seated in a dimly lit experimental room in front of a 92 × 50 cm TV screen at a distance of approximately one meter. In order to exclude any spoken German influence during the experiment, all conversation before, during, and after the experiment was in DGS. After giving informed written consent to their participation, participants saw an introduction video in DGS explaining the task facing them. They were asked to watch the upcoming videos and to answer the interspersed yes/no-questions by pressing the corresponding buttons on an X-Box controller. Next, participants were presented with two practice blocks, which included 8 sentence-video trials and one yes/no-question trial. There was no overlap between stimuli in the practice block and the main experiment. The experiment started after the practice block, once all of the participant's remaining questions had been clarified. The experimental session was split into 4 blocks, each containing 40 critical sentence trials interspersed with five question trials in which participants were asked a question related to the content of one of the preceding eight sentences. Sentences of each condition (within-language priming, within-language control, cross-language priming, cross-language control) were equally allocated to the 4 blocks and presented in a pseudo-randomized fashion such that target signs were not repeated within blocks. The order of blocks was counterbalanced across participants. Question trials were inserted after every 8th test trial. Sentence trials were presented automatically and separated by 1000 ms during which a blank screen was shown. Sentence trials following question trials started 1000 ms after a response button was pressed. No feedback on the accuracy of the response was given. After every block, participants could take a break and continue the experiment by pressing a button on the response box. After the experimental session, participants were asked to complete the post-experimental translation task, which was used to check whether participants had activated the intended German translation of the critical signs.
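One way to implement the block allocation constraints described above (equal condition representation per block, no repeated target sign within a block) is sketched below; the sentence data structure and field names are hypothetical, and the paper does not specify how the randomization was actually implemented.

```python
import random
from collections import defaultdict

def assign_blocks(sentences, n_blocks=4, seed=1):
    """Illustrative pseudo-randomization: each condition contributes equally to
    every block and no target sign is repeated within a block. `sentences` is
    assumed to be a list of dicts with 'condition' and 'target' keys."""
    rng = random.Random(seed)
    by_condition = defaultdict(list)
    for s in sentences:
        by_condition[s["condition"]].append(s)

    while True:  # rejection sampling: reshuffle until the target constraint holds
        blocks = [[] for _ in range(n_blocks)]
        for items in by_condition.values():
            rng.shuffle(items)
            for i, s in enumerate(items):
                blocks[i % n_blocks].append(s)
        if all(len({s["target"] for s in b}) == len(b) for b in blocks):
            for b in blocks:
                rng.shuffle(b)  # randomize trial order within each block
            return blocks
```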

2.4 EEG recording

EEG data were recorded using a Biosemi ActiveTwo amplifier system. Data were recorded from 32 Ag/AgCl electrodes, mounted in an elastic cap according to the international 10–20 system, with a sampling rate of 2048 Hz (see Figure 2 for electrode montage and analysis configuration). Electrode offsets were kept below 20 μV. EEG recordings were referenced offline to the average of the left and right mastoid electrodes. The electrooculogram (EOG) was recorded for each participant from three electrodes, one at the outer canthus of each eye (horizontal EOG) and one below the left eye (vertical EOG).

Figure 2

Electrode montage with reference to the 10–20 system. Squares indicate which electrodes were combined into regions of interest for statistical analysis. Regions of interest were formed over midline electrodes and fronto-central, centro-parietal and parieto-occipital electrodes in both hemispheres.

2.5 Trigger placing

When analyzing ERPs related to a target sign within an ongoing, unspliced signing stream, the corresponding trigger position needs to be identified on the basis of visual cues within the signing stream. In the present study, we analyzed ERPs relative to the trigger sign onset, defined as the first frame of the target sign in which the target hand configuration (i.e., handshape and orientation) reaches its initial location, right before the sign's path movement, in accordance with other sign language ERP priming studies (e.g., Gutiérrez et al. 2012a; b; Hosemann et al. 2013).
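Since the trigger is defined over video frames, it has to be converted into an EEG sample index for epoching. The following sketch shows the arithmetic under the assumption that a synchronization trigger marks the video onset in the EEG recording; the function and variable names are hypothetical.

```python
def sign_onset_sample(video_start_sample, onset_frame, fps=25, sfreq=2048):
    """Convert the video frame at which the target hand configuration reaches its
    initial location into an EEG sample index, assuming a trigger was sent at
    video onset (hypothetical synchronization scheme, for illustration only)."""
    onset_seconds = onset_frame / fps          # videos run at 25 frames/sec
    return video_start_sample + int(round(onset_seconds * sfreq))

# e.g., target onset identified at frame 151 of a video whose first frame
# corresponds to EEG sample 1_000_000:
print(sign_onset_sample(1_000_000, 151))
```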

2.6 EEG data preprocessing and statistical analysis

The raw EEG data were filtered offline with a 0.01 Hz high-pass and a 30 Hz low-pass filter. Single-subject averages were calculated per condition and electrode between –200 and 1000 ms relative to the trigger sign onset. Trials containing eye blinks or other artifacts were rejected using a 120 μV amplitude cut-off threshold. Hosemann et al. (2013) investigated ERP correlates time-locked to three different time points within the ongoing signing stream. The results suggested that unexpected signs engendered an N400 prior to the sign onset, elicited by pre-lexical cues in the transition phase. In other words, the transition phase may pre-empt processing of the target sign. Given that the transition phase already includes information regarding the identity of the upcoming sign, we did not baseline correct the data to the time period prior to the onset of the sign. Previous studies on sign language processing similarly report uncorrected data for this reason (Hosemann et al. 2013). Nevertheless, to allow comparison with previous studies examining similar issues in spoken language processing, we also report baseline-corrected data in Section 3.2, Footnote 2. Three of the 15 deaf participants had to be excluded from further analysis due to excessive eye movement artifacts and/or major EEG drifts.
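A minimal sketch of this preprocessing chain using MNE-Python is given below. File names, reference channel labels, and event codes are hypothetical; the original analysis was not necessarily implemented with this toolbox.

```python
import mne

raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)   # Biosemi recording, 2048 Hz
raw.set_eeg_reference(["EXG1", "EXG2"])                     # offline re-reference to averaged mastoids
raw.filter(l_freq=0.01, h_freq=30.0)                        # 0.01 Hz high-pass, 30 Hz low-pass

events = mne.find_events(raw)                               # triggers placed at target sign onset
event_id = {"within_priming": 1, "within_control": 2,
            "cross_priming": 3, "cross_control": 4}

epochs = mne.Epochs(raw, events, event_id,
                    tmin=-0.2, tmax=1.0,                    # -200 to 1000 ms around sign onset
                    baseline=None,                          # no baseline correction (see text)
                    reject=dict(eeg=120e-6),                # reject trials exceeding the 120 µV threshold
                    preload=True)

# single-subject averages per condition
evokeds = {cond: epochs[cond].average() for cond in event_id}
```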

For each participant, we excluded those trials in which the participant translated the signs differently from our intended translations in the post-experimental translation task. This translation task was conducted for the cross-language priming condition but not for the within-language priming condition, which led to a different number of trials entering the analysis. On average, participants translated prime and target as intended in 51.67% of the cases (mean of ‘correctly’ translated items: 20.67; range 13–25; sd 3.47). Hence, on average 20 items (range: 13–25) per participant entered the analysis for the cross-language priming condition. The comparatively high number of ‘incorrectly’ translated videos can be explained by two factors: First, only translations with the exact intended German minimal pairs were included in the analysis. We excluded those sentences from the analysis that contained semantically related word translations, for example the hypernym flower instead of the intended word rose (note that the DGS signs FLOWER and ROSE are identical). Second, as already mentioned above, signs have a higher contextual ambiguity than German words and are often disambiguated in DGS by mouthing. Since we did not present prime and target signs with corresponding mouthing (see Section 2.2), we may have increased the potential ambiguity of these signs. After artifact and translation rejection, a total of 248 out of 480 items and their control counterparts entered the analysis for the cross-language priming condition. For the within-language priming condition, a total of 469 out of 480 items and their control counterparts entered the analysis after artifact rejection. Note that each condition (within-language and cross-language) had its own control sentences, so that the dropout between primed and unprimed sentences was consistent within each condition. Grand averages were computed over single-subject averages.

For the statistical analysis, repeated-measures analyses of variance (ANOVAs) were calculated for the factors CONDITION (within-language vs. cross-language), PRIMING (priming vs. control), HEMISPHERE (left vs. right), and REGION. Three regions of interest were formed for each hemisphere: fronto-central (F3, F4, FC1, FC2), centro-parietal (C3, C4, CP1, CP2), and parieto-occipital (P3, P4, PO3, PO4). A separate ANOVA examined brain potentials across the midline electrodes Fz, Cz, and Pz. The statistical analysis was carried out in a hierarchical manner. Where appropriate, a Greenhouse-Geisser correction was applied (Greenhouse & Geisser 1959).
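The following sketch illustrates how such a repeated-measures ANOVA could be run over per-cell mean amplitudes (e.g., in the 450–650 ms window); the data frame layout and file name are hypothetical, and sphericity corrections would be applied separately where appropriate.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One mean ERP amplitude per subject and design cell, with columns:
# subject, condition (within/cross), priming (primed/control),
# hemisphere (left/right), region (fronto-central/centro-parietal/parieto-occipital), amplitude
df = pd.read_csv("mean_amplitudes_450_650ms.csv")

aov = AnovaRM(df, depvar="amplitude", subject="subject",
              within=["condition", "priming", "hemisphere", "region"]).fit()
print(aov)  # Greenhouse-Geisser correction would be applied where appropriate
```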

3 Results

3.1 Behavioral data

After each series of 8 sentences, participants were presented with a yes/no-question regarding the content of one of the previously seen videos to control for general attention. Participants gave the correct, intended answer to 72.5% (total 174/240; participant mean 16, sd 3) and an incorrect answer to 27.5% (66/240; participant mean 4, sd 3) of the questions.1 Since the correct answers were significantly above chance (χ2(1) = 48.6, p < 0.01), we assumed that participants were sufficiently attentive during the experiment, and so we did not exclude any participants from the analysis on the basis of the behavioral data.
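The reported chi-square value can be reproduced as a goodness-of-fit test against chance (50%) responding:

```python
from scipy.stats import chisquare

observed = [174, 66]             # correct vs. incorrect answers out of 240
stat, p = chisquare(observed)    # expected frequencies default to 120/120
print(stat, p)                   # chi2(1) = 48.6, p well below 0.01
```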

3.2 ERP data

3.2.1 300 to 400 ms

In a preliminary analysis, the omnibus ANOVA with the factors CONDITION (within-language vs. cross-language), PRIMING (priming vs. control), HEMISPHERE (left vs. right), and REGION (fronto-central, centro-parietal, parieto-occipital) revealed no significant results (ps > 0.05).

3.2.2 450 to 650 ms

An omnibus ANOVA with the factors CONDITION (within-language vs. cross-language), PRIMING (priming vs. control), HEMISPHERE (left vs. right), and REGION (fronto-central, centro-parietal, parieto-occipital) revealed a main effect of priming (F(1,11) = 12.99, p < 0.01). An omnibus ANOVA with the factors CONDITION, PRIMING and REGION on the midline electrodes showed a similar main effect of priming (F(1,11) = 13.97, p < 0.01). No other main effects or interactions reached significance (ps > 0.1).2 The absence of an interaction between PRIMING and CONDITION suggests that both the within-language and cross-language priming conditions elicited a similar priming effect in our deaf participants. Nevertheless, despite the absence of an interaction between these two factors, planned comparisons were computed to ensure that a similar priming effect was obtained in both conditions. These analyses confirmed that ERPs to targets preceded by within-language related primes were significantly different from ERPs to the same targets preceded by unrelated primes (hemispheric electrodes: n.s.; midline electrodes: F(1,11) = 5.13, p < 0.05). Similarly, ERPs to targets preceded by cross-language related primes were significantly different from ERPs to the same targets preceded by unrelated primes (hemispheric electrodes: F(1,11) = 7.75, p < 0.05; midline electrodes: F(1,11) = 8.062, p < 0.05).

Figure 3 presents grand averages of ERPs time-locked to the sign onset of the target sign for within-language priming conditions as well as for cross-language priming conditions. As can be seen from the graphs, target signs in primed conditions (marked in red) led to ERPs with a less negative deflection compared to target signs in un-primed control conditions (marked in blue). Based on visual inspection, the elicited ERP signal deviates from a classical N400 in that primed conditions show a positive deflection compared to unprimed conditions. However, the N400 does not need to be negative in absolute terms (see review in Kutas & Federmeier 2011). We will come back to this in more detail in the discussion section. Interestingly, primed target signs both in the within- and cross-language priming condition elicited a similar neurophysiological response. This indicates that overt sign phonological priming as well as covert orthographic priming via German translation equivalents modulated the processing of target signs.

Figure 3

Grand average ERPs for (A) target signs in within-language conditions (primed targets = dark red, unprimed control targets = dark blue); and for (B) target signs in cross-language conditions (primed targets = bright red, unprimed targets = bright blue). Negativity is plotted upwards. The related trigger position is the sign onset. For visual presentation of the plots we used a 0.1–20 Hz display filter.

4 Discussion

In the present study, we examined whether congenitally deaf bimodal bilinguals co-activate L2 written word representations during the processing of sentences in their L1 sign language, DGS. We measured ERP responses to target signs embedded in a sentence context. Target signs were either preceded by an overtly sign-phonologically related prime sign (within-language priming), or by a prime sign whose German translation was an orthographic and phonological minimal pair to the target sign's German translation, sharing its rime (cross-language priming). For within-language priming, we predicted reduced negative deflections to primed vs. unprimed targets, due to facilitated recognition based on the overt sign-phonological overlap. For cross-language priming, we also predicted facilitated recognition, based on co-activation of the orthographically/phonologically related German translation equivalents. As expected, we found a robust priming effect within as well as across languages: Primed target signs engendered a less negative deflection with a mostly central scalp distribution. ERPs were significantly less negative 450–650 ms after sign onset in both primed conditions compared to the control conditions. This indicates that the processing of target signs was modulated by the overlap with prime signs. Crucially, this was not only the case for prime-target pairs with overt sign-phonological overlap in DGS, but also for prime-target pairs that did not overlap as signs but overlapped only in the orthography/phonology of their German translations. Extending previous studies with bimodal bilinguals on cross-language co-activation (see Section 1.1), the current study provides evidence that deaf bimodal bilinguals activate less dominant L2 written word representations during the processing of L1 signs in continuous sentence contexts. That is to say, the data provide evidence for cross-language co-activation from sign to word in the processing of signed sentences.

4.1 Interpretation of ERP data

The EEG data show less negative-going deflections to primed sentences compared to unrelated sentences 450–650 ms after target sign onset. We interpret the data as reflecting modulations of the N400 component elicited by the sign language and spoken/written language translation overlap between prime and target, even though we note that the polarity and timing do not fit the canonical N400 response. Yet, the N400 does not need to be negative in absolute terms (Kutas & Federmeier 2011), and delayed N400 effects have previously been reported for bilingual language processing (e.g., Phillips et al. 2006; Van Hell & Kroll 2013). The polarity and timing of the effect might further be related to methodological aspects of the study: First, because we presented target signs within a fluent stream of signs, there is a natural transition phase prior to the sign onset, in which the sign is not yet recognizable but in which some cues are already present (see also Section 2.5). Second, similar to previous studies investigating sign language processing (Hosemann et al. 2013), we did not baseline correct our data because pre-lexical cues in the transition phase might pre-empt processing of the target sign (see also Section 2.6). Both aspects might have contributed to a less canonical response pattern. We therefore take our data as evidence for a priming effect for related relative to unrelated targets. In particular, we argue that the increased overlap between primes and targets in related conditions facilitated processing of the target. In the within-language priming condition, phonological parameters of the target may have been pre-activated by the presentation of the prime. In the cross-language priming condition, the L2 translation of the prime may have led to pre-activation of the orthographically/phonologically related L2 translation of the target. In both cases, this pre-activation leads to facilitated target recognition in primed compared to unprimed conditions.

The timing and the positive deflection might also suggest an alternative interpretation in terms of a late positive component (LPC), which is typically associated with recognition memory. In the context of bilingualism, an LPC has been reported in research with hearing unimodal bilinguals processing a code switch (e.g., Moreno et al. 2002; FitzPatrick & Indefrey 2014). With regard to the results of our study, neither the positivity engendered by within-language primed target signs nor that engendered by cross-language primed target signs can be explained in terms of an overt code-switching event. However, the covert overlap between the German translations of the sign language prime-target pairs, or the overt overlap in sign language phonology, might increase the salience of the target word. Especially in the case of co-activation of the L2 translation of the L1 sign, the results could be interpreted as indexing the co-activation of unexpected task-relevant information. Nevertheless, it should be noted that the LPC has been reported to be strongest at parietal electrodes (Duarte et al. 2004; Rugg & Curran 2007; Voss et al. 2010), while the effect in our study was most pronounced at central electrodes. This speaks for an interpretation in terms of the N400, as outlined above.

Taken together, the observed cross-language effect strongly suggests (automatic) co-activation of the L2 German translation equivalents of L1 signs during L1 sentence processing: Deaf bimodal bilinguals co-activate the German translations of DGS signs during signed sentence processing, i.e., information from the L2 (German) is co-activated across modalities during fluent L1 sentence processing (DGS). Here, it is important to qualify the dominance of the two languages across different groups of signers (i.e., hearing and deaf), especially in the context of previous studies examining language non-selective lexical access in signers. As already discussed in Section 1.1, Lee et al. (2019) show that the dominance of spoken/written and sign language differs dramatically between hearing and deaf bimodal bilinguals. Although hearing bimodal bilinguals acquire the sign language as their native language from their parents, the spoken language often becomes their dominant language through school education and broader sociolinguistic contexts. For congenitally deaf bimodal bilinguals, however, the sign language is, naturally, the dominant language (Emmorey et al. 2013; 2016; Pizer et al. 2013). The results reported in the current study therefore provide evidence that the less dominant, perceptually non-overlapping L2 words are co-activated during L1 signed sentence processing. This has important implications for models of bilingual word recognition concerning the nature of cross-talk between two languages during processing, which we discuss next.

4.2 Theoretical account of co-activation in bimodal bilinguals

Although theoretical models of bilingual word recognition focus on hearing unimodal bilinguals, in what follows we review them with regard to bimodal bilingual sign/word recognition. The Revised Hierarchical Model (RHM) by Kroll & Stewart (1994) proposes that there are strong connections from L2 to L1 on the lexical level, because individuals learn a second language on the basis of first language mediation. Thus, individuals at a relatively early stage of L2 learning process L2 items via their L1 translations. However, processing of L2 items via the corresponding L1 translation decreases with advanced L2 competence. Thus, more proficient L2 speakers can directly access concepts for L2 items. In the present study, we demonstrate strong connections between L1 sign representations and L2 word representations that enable co-activation from L1 to L2, even during L1 sentence processing. While the RHM does not preclude parallel activation of L2 during L1 processing (since L1 and L2 share a conceptual system), it focuses on (semantic) L1 activation during L2 processing and does not directly address how cross-modal cross-language non-selective lexical access between L1 sign representations and L2 orthographic representations of words comes about.

In contrast to the RHM, the Bilingual Interactive Activation model (BIA+, Dijkstra & van Heuven 2002) allows greater insight into the mechanisms underlying cross-language priming effects in orthographic word recognition. BIA+ assumes that language activation is not language-selective, i.e., that input in one language automatically activates word candidates in both languages of a bilingual. Hence, processing a prime word presented in one language affects the processing of an upcoming target word of the second language. The strongest evidence comes from orthographic word recognition tasks with cognate words, which not only share meaning but also overlap in orthography across languages (such as the Dutch-English cognate tomaat – tomato). Conceptually, BIA+ assumes separate lexical entries for L1 and L2. Thus, in contrast to the RHM, access to L2 semantics is not mediated by L1; rather, L2 entails separate word form-meaning associations. Importantly, BIA+ assumes an integrated lexicon containing words from both languages, so the languages are not functionally separated. Although we find priming effects suggestive of parallel activation of both languages – as predicted by BIA+ – the model may have difficulties capturing the pattern of effects reported here. BIA+ is, in principle, a model of orthographic recognition and assumes activation of only direct competitors. Similarity of the input is a decisive premise of the model, and such similarity by definition does not exist in the present case of sign and word representations in different language modalities. Thus, the mechanisms underlying the co-activation of the non-overlapping other-language representation remain unclear within the structure proposed by BIA+.

The most promising explanation of the current results stems from the Bilingual Language Interaction Network for Comprehension of Speech model (BLINCS) by Shook & Marian (2013). BLINCS is a computational model of bilingual language processing that consists of an interconnected network of self-organizing maps (SOMs). Self-organizing maps are based on unsupervised learning algorithms: The input received by a SOM is mapped onto its best-matching output unit via learned associations (such as associations between word form and meaning), and inputs that share features or consistently co-occur cluster together (such as semantically related words). BLINCS contains several different layers for different representational levels in the lexicon, i.e., phonological, semantic, phono-lexical and ortho-lexical layers. Each layer is a self-organizing map. The layers are bi-directionally connected so that activation can spread within as well as between levels. Both languages of a bilingual are represented in the same SOMs, i.e., they are not functionally distinct. Competition as well as co-activation between languages is a result of these shared representations. In the case of unimodal bilingualism, the model postulates that phonological and semantic levels are shared across languages, i.e., they rely on the same basic semantic and phonological features for both languages. Phono-lexical and ortho-lexical levels are not directly shared but overlapping, e.g., for cognate words. Within the phono- and ortho-lexical levels, there are lateral connections between words within and across languages, so there can be lateral competition. Furthermore, all levels can feed back to higher and lower levels: For example, semantics can restrict activation at the phono-lexical level, and the phono-lexical level can feed back to the phonological level (cf. Shook & Marian 2013: 306).
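Since BLINCS builds on Kohonen-style self-organizing maps, a minimal SOM sketch may help clarify the underlying mechanism: each input vector activates its best-matching unit, and that unit and its neighbors are pulled towards the input, so that similar inputs end up clustering on nearby map units. This is a generic illustration of the learning algorithm, not the BLINCS implementation itself.

```python
import numpy as np

def train_som(inputs, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen self-organizing map: map each input to its best-matching
    unit (BMU) and pull the BMU and its neighborhood towards the input."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, inputs.shape[1]))
    # grid coordinates of every unit, used for the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighborhood radius
        for x in inputs:
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))   # best-matching unit
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            neighborhood = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * neighborhood[..., None] * (x - weights)
    return weights

# Toy example: two clusters of four-dimensional "feature vectors" come to occupy
# different regions of the map after training.
rng = np.random.default_rng(42)
data = np.vstack([rng.normal(0.2, 0.05, (20, 4)), rng.normal(0.8, 0.05, (20, 4))])
som_weights = train_som(data)
```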

In the case of bimodal bilingualism, the modalities of the input differ and cross-language co-activation cannot, therefore, rely on the same phonemes. However, the model already assumes a visual information level to capture mouth representation cues such as those underlying the McGurk effect (McGurk & MacDonald 1976) and to restrict activation to words relevant to the visual context. Furthermore, BLINCS predicts that activation will spread within and across layers such that not only direct translations across languages are co-activated, but also related words within the non-target language receive some activation. Thus, expanding the model to incorporate another input layer to capture the visual input of sign phonological features would capture the findings reported in the current study, with regard to the co-activation of non-overlapping other language representations.

The modified version of BLINCS incorporating bimodal bilingual representation levels would, nevertheless, have to answer two crucial questions: First, what are the sub-lexical features of word representations in congenitally deaf people (spoken and/or written word representations), given that by definition they cannot be based on auditory phonological features? Second, if activation spreads between the sign-phonological level and the spoken/written word representation level, what are the features shared by these two modality-distinct levels? With regard to the first question, we argue that sub-lexical features of word representations in congenitally deaf people consist of visual cues such as mouth representations of spoken words and orthographic representations of written words. MacSweeney et al. (2013) provide evidence that congenitally deaf people have some sort of phonological representation of spoken words. The authors recorded the EEG of deaf and hearing participants during the presentation of English phonologically rhyming and non-rhyming word pairs (rhyming: bear-fair; non-rhyming: scar-fair). Note that – unlike German, which has a phonologically more transparent orthography – these English word pairs rhyme in their phonology but deviate in their orthography. Interestingly, both deaf and hearing participants showed an enhanced negativity to non-rhyming word pairs compared to rhyming word pairs at approximately 450 ms. In other words, English word pairs that rhyme phonologically but do not overlap orthographically (bear-fair) showed a more positive-going ERP response compared to non-rhyming English word pairs (scar-fair). Given the absence of orthographic overlap, this effect can only be explained in terms of the phonological overlap between the words. The authors argue on the basis of these data that phonological processing is, to a large extent, amodal or supramodal, a notion supported by an fMRI study on English rhyme judgments with deaf participants (MacSweeney et al. 2008). In other words, a certain degree of phonological processing accompanies orthographic processing in congenitally deaf populations. With regard to our stimuli, the German translation equivalents of prime-target pairs in the cross-language condition were orthographic as well as phonological minimal pairs in German. We assume, therefore, that sub-lexical representations of German words either consist of only visual features, such as orthographic and mouth representation features, or consist of a combination of visual and amodal phonological features, such that overlap at either or both of these levels can be used to detect cross-language co-activation of other-language representations (as in the current study).

The second question concerns the modality-independent shared lexical features that allow for cross-modal language co-activation. The phonological representations of signs include features of the visual-manual modality, such as handshape, orientation, location and movement of the signs. Orthographic representations of words, in turn, include graphemic and/or amodal phonological features, which have no overlap with sign phonology. Previous studies on cross-modal cross-language co-activation (Morford et al. 2011; Ormel et al. 2012; Shook & Marian 2012; Kubus et al. 2014) discuss two alternatives for the connections mediating cross-modal co-activation: (i) an interconnection via a shared semantic node at the semantic level (Ormel et al. 2012: 301), and (ii) an interconnection via a direct, non-semantic link at the lexical level (Shook & Marian 2012: 321). The critical difference between the two explanations lies in the assumption of a shared semantic node that mediates cross-modal language co-activation, similar to the propositions made by the RHM.

We suggest, however, that an alternative link between sign and word representations comes from visual cues of mouth representations of spoken words, also called mouthing. The mouthing of a sign is defined as the silent articulation of (parts of) the corresponding spoken word simultaneously with the production of the sign. For example, when producing the manual DGS sign MOTHER, the lips purse and silently articulate the syllable mu. As such, mouthing is a nonmanual component occurring simultaneously with manual signs in sign language production, and it spells out (parts of) spoken words. Assuming an adapted version of BLINCS for bimodal bilinguals, it is possible that the corresponding mouthing of a sign constitutes a feature of the sub-lexical representation of signs as well as a feature of the sub-lexical representation of words. The phonological SOM would thus not be a set of phonemes shared by L1 and L2, as it is for unimodal bilinguals. Rather, the phonological SOM in the bimodal bilingual model would be a set of (a) sign-phonological features and (b) spoken language sub-lexical features. It would include the manual and nonmanual elements of sign languages – such as handshape, orientation, location, and movement of the hands, as well as lexicalized facial expressions, head and upper body positions, and mouthing. And it would also include the sub-lexical features of spoken languages – such as orthographic representations, potentially amodal phonological features, and visual cues of mouth representations. Crucially, the visual representation of the mouthing is shared by L1 sign features as well as by L2 spoken word features. Figure 4 presents an adapted version of the BLINCS model for (deaf) bimodal bilingual language processing.

Figure 4

Adapted version of the Bilingual Language Interaction Network for Comprehension of Speech model (BLINCS) by Shook & Marian (2013: 306). In this adapted version for (deaf) bimodal bilinguals, we propose visual information as the input modality. In contrast to unimodal bilinguals, who have shared phonological features for both their spoken languages, we suggest that the phonological level of deaf bimodal bilinguals contains features related only to L1 (signs), features related only to L2 (words), and shared phonological features such as visual cues of mouthing. The arrows between levels indicate a spread of activation across the different levels of representation. According to the original BLINCS model, “[t]here are bi-directional excitatory connections between and within each level of the model, and inhibitory connections at the phono-lexical and ortho-lexical levels.” (Shook & Marian 2013: 306). Whether the inhibitory connections also hold for (deaf) bimodal bilinguals is a question for future research.

Cross-modal language co-activation in bimodal bilinguals could therefore be triggered at a very early stage of language processing by the shared feature of visual mouth representations. Thus, the mouthing component of a sign representation could co-activate the corresponding spoken word representation and its neighbors. In the design of our stimulus material, we deliberately presented the critical prime and target signs without mouthing (see Section 2.2) to prevent facilitation of target recognition based on similar mouthing cues during prime production. However, since activation in the BLINCS model spreads within and across levels, the manually presented signs could have co-activated the corresponding mouthing features and thereby the corresponding German word representations, leading to the reported priming effect.
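A minimal sketch of such an activation cascade follows. Again, this is purely illustrative rather than the BLINCS implementation: the connection weights, the node labels, and the neighboring word form are hypothetical.

    # Illustrative spreading-activation cascade (hypothetical nodes and weights).
    # Activating the manual features of a sign spreads to its mouthing feature
    # and from there to the corresponding German word form and a neighbor.
    links = {
        "sign:MOTHER(manual)": {"feature:mouthing-mu": 0.8},
        "feature:mouthing-mu": {"word:Mutter": 0.6},
        "word:Mutter": {"word:Mutti": 0.5},  # hypothetical orthographic/phonological neighbor
    }

    def spread(node, activation=1.0, acts=None):
        # Propagate activation recursively along weighted links.
        acts = {} if acts is None else acts
        acts[node] = acts.get(node, 0.0) + activation
        for target, weight in links.get(node, {}).items():
            spread(target, activation * weight, acts)
        return acts

    print(spread("sign:MOTHER(manual)"))
    # The German word form receives activation although no mouthing was presented.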

Further evidence for the idea that mouthing provides a connective link between sign and word representations comes from a case study of a deaf bilingual woman (British Sign Language (BSL) – English) with left-hemispheric aphasia (Marshall et al. 2005). Although ‘Maureen’ showed no spontaneous language production, and her comprehension of BSL and English was severely impaired, especially on the semantic level, she could be cued to produce English words when presented with a sign plus mouthing. Since Maureen could not be cued to produce English words by gestures, Marshall et al. (2005) argue that English word representations and BSL sign representations are not mediated via a semantic node, but rather directly linked in Maureen’s lexicon via the mouthing: “The above evidence suggests that mouthing should be viewed as a bilingual contact phenomenon […]” (Marshall et al. 2005: 733).

Similarly, neuroimaging studies with deaf bimodal bilinguals provide neurocognitive evidence for cortical overlap in processing signs with mouthing and processing speech. Capek et al. (2008) investigated whether the cortical correlates of processing sign language differ from those of processing seen speech. More specifically, they examined whether the cortical correlates of processing signs with mouthing differ from those of processing signs with non-speech mouth actions (e.g., mouth gestures or echo phonology; Woll 2001). The results indicate that signs with mouthing and seen speech are processed in similar regions, distinct from those regions used for processing purely manual signs and signs with mouth actions. This supports the assumption that signs with mouthing have a structural overlap with spoken words that blurs the distinction between their sub-lexical representations.

However, the linguistic status of mouthing is as yet unclear and remains a highly controversial topic in sign language research (for an overview see Boyes Braem & Sutton-Spence 2001). Indeed, the results of the current study do not provide a conclusive answer as to whether cross-language co-activation is mediated via a semantic node, via a direct lexical link, or via mouthing as a shared sub-lexical representation. Nevertheless, we believe that mouthing may provide an additional link between the two languages of a deaf bimodal bilingual that bypasses the semantic or lexical route. A concrete way to test this would be to compare the degree of co-activation for noun prime-target pairs with that for prime-target pairs of other word classes, because mouthing is often used to a larger extent with nouns.3 Hence, we highlight the need for further research on this issue, with respect to the function of mouthing as a nonmanual component in the sub-lexical representation of signs, as well as with respect to the kind of sub-lexical representations of spoken words in congenitally deaf people who are not exposed to auditory input.

5 Summary and conclusions

The present ERP study on DGS priming demonstrates that target signs preceded by sign-phonologically overlapping prime signs can engender a priming effect during sentence processing (within-language priming effect). This indicates that signs with an overt phonological overlap can prime each other during sentence processing. The study also presents evidence for the co-activation of second language words during native sign language processing: target signs preceded by sign-phonologically unrelated primes whose German translation equivalents were orthographically/phonologically related also engendered a priming effect (cross-modal cross-language priming effect). Hence – and more interestingly – deaf bimodal bilinguals also seem to activate orthographic/phonological representations of L2 spoken words during L1 sign language sentence processing. We have discussed several explanations for this cross-modal co-activation based on semantic and non-semantic links. However, we favor an alternative explanation that assumes the mouthing of a sign to be a shared representational component on the sub-lexical level of sign and word representations. We therefore argue that overt phonological priming occurs during natural sign language sentence processing and that the processing of their native sign language by congenitally deaf native signers can lead to the co-activation of orthographic/phonological representations of their second (spoken) language.

Additional File

The additional file for this article can be found as follows:

Appendix

Complete list of stimulus sentences. DOI: https://doi.org/10.5334/gjgl.1014.s1

Notes

  1. Note that one participant confused the response buttons and gave incorrect answers to all 20 questions. The data were recoded for this participant.
  2. An analysis including a baseline correction from –200 to –100 ms prior to the trigger position revealed no difference in the overall effects. It showed an overall main effect of condition (within-language vs. cross-language): hemispheric electrodes F(1,11) = 7.67, p < 0.05; midline electrodes F(1,11) = 5.43, p < 0.05; and of priming (priming vs. control): hemispheric electrodes F(1,11) = 15.28, p < 0.01; midline electrodes F(1,11) = 14.57, p < 0.01; but no interaction between the two factors.
  3. We thank one anonymous reviewer for this idea.

Acknowledgements

We are very grateful to three anonymous reviewers, who provided constructive critique that helped us to improve the paper. The usual disclaimers apply. We gratefully thank our deaf informant Roland Metz for a productive cooperation in discussing, developing and recording the DGS stimuli. We also gratefully thank Liona Paulus for supporting this study with the recruitment and instruction of the deaf participants. Most of all, we are very impressed by and grateful for the effort all deaf participants took to support this research.

Competing Interests

The authors have no competing interests to declare.

References

Altmann, Gerry T. M. & Yuki Kamide. 1999. Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition 73(3). 247–264. DOI:  http://doi.org/10.1016/S0010-0277(99)00059-1

Alvarez, Ruben P., Phillip J. Holcomb & Jonathan Grainger. 2003. Accessing word meaning in two languages: An event-related brain potential study of beginning bilinguals. Brain and Language 87(2). 290–304. DOI:  http://doi.org/10.1016/S0093-934X(03)00108-1

Ardal, Sten, Merlin W. Donald, Renata Meuter, Shannon Muldrew & Moira Luce. 1990. Brain responses to semantic incongruity in bilinguals. Brain and Language 39(2). 187–205. DOI:  http://doi.org/10.1016/0093-934X(90)90011-5

Baker, Anne & Bencie Woll (eds.). 2008. Sign language acquisition. Amsterdam: John Benjamins. DOI:  http://doi.org/10.1075/bct.14

Bornkessel-Schlesewsky, Ina, Adrian Staub & Matthias Schlesewsky. 2016. The timecourse of sentence processing in the brain. In Gregory Hickok & Steven L. Small (eds.), Neurobiology of language, 607–620. London: Academic Press. DOI:  http://doi.org/10.1016/B978-0-12-407794-2.00049-3

Boyes Braem, Penny & Rachel Sutton-Spence (eds.). 2001. The hands are the head of the mouth: The mouth as articulator in sign languages. Hamburg: Signum.

Brenders, Pascal, Janet G. Van Hell & Ton Dijkstra. 2011. Word recognition in child second language learners: Evidence from cognates and false friends. Journal of Experimental Child Psychology 109(4). 383–396. DOI:  http://doi.org/10.1016/j.jecp.2011.03.012

Capek, Cheryl M., Dafydd Waters, Bencie Woll, Mairéad MacSweeney, Michael J. Brammer, Philip K. McGuire & Ruth Campbell. 2008. Hand and mouth: Cortical correlates of lexical processing in British Sign Language and speechreading English. Journal of Cognitive Neuroscience 20(7). 1220–1234. DOI:  http://doi.org/10.1162/jocn.2008.20084

Connolly, John F. & Natalie A. Phillips. 1994. Event-related potential components reflect phonological and semantic processing of the terminal word of spoken sentences. Journal of Cognitive Neuroscience 6. 256–266. DOI:  http://doi.org/10.1162/jocn.1994.6.3.256

Dijkstra, Ton, Jonathan Grainger & Walter J.B. van Heuven. 1999. Recognition of cognates and interlingual homographs: The neglected role of phonology. Journal of Memory and Language 41(4). 496–518. DOI:  http://doi.org/10.1006/jmla.1999.2654

Dijkstra, Ton & Walter J.B. van Heuven. 2002. The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition 5(3). 175–197. DOI:  http://doi.org/10.1017/S1366728902003012

Duarte, Audrey, Charan Ranganath, Laurel Winward, Dustin Hayward & Robert T. Knight. 2004. Dissociable neural correlates for familiarity and recollection during the encoding and retrieval of pictures. Cognitive Brain Research 18(3). 255–272. DOI:  http://doi.org/10.1016/j.cogbrainres.2003.10.010

Dumay, Nicolas, Abdelrhani Benraiss, Brian Barriol, Céline Colin, Monique Radeau & Mireille Besson. 2001. Behavioral and electrophysiological study of phonological priming between bisyllabic spoken words. Journal of Cognitive Neuroscience 13(1). 121–143. DOI:  http://doi.org/10.1162/089892901564117

Emmorey, Karen, Helsa B. Borinstein, Robin Thompson & Tamar H. Gollan. 2008. Bimodal bilingualism. Bilingualism: Language and Cognition 11(1). 43–61. DOI:  http://doi.org/10.1017/S1366728907003203

Emmorey, Karen, Marcel R. Giezen & Tamar H. Gollan. 2016. Psycholinguistic, cognitive, and neural implications of bimodal bilingualism. Bilingualism: Language and Cognition 19(2). 223–242. DOI:  http://doi.org/10.1017/S1366728915000085

Emmorey, Karen, Jennifer A.F. Petrich & Tamar H. Gollan. 2013. Bimodal bilingualism and the frequency-lag hypothesis. The Journal of Deaf Studies and Deaf Education 18(1). 1–11. DOI:  http://doi.org/10.1093/deafed/ens034

FitzPatrick, Ian & Peter Indefrey. 2014. Head start for target language in bilingual listening. Brain Research 1542. 111–130. DOI:  http://doi.org/10.1016/j.brainres.2013.10.014

Giezen, Marcel R. & Karen Emmorey. 2016. Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture-word interference. Bilingualism: Language and Cognition 19(2). 264–276. DOI:  http://doi.org/10.1017/S1366728915000097

Greenhouse, Samuel W. & Seymour Geisser. 1959. On methods in the analysis of profile data. Psychometrika 24(2). 95–112. DOI:  http://doi.org/10.1007/BF02289823

Gutiérrez, Eva, Oliver Müller, Cristina Baus & Manuel Carreiras. 2012a. Electrophysiological evidence for phonological priming in Spanish Sign Language lexical access. Neuropsychologia 50(7). 1335–1346. DOI:  http://doi.org/10.1016/j.neuropsychologia.2012.02.018

Gutiérrez, Eva, Deborah Williams, Michael Grosvald & David Corina. 2012b. Lexical access in American Sign Language: An ERP investigation of effects of semantics and phonology. Brain Research 1468. 63–83. DOI:  http://doi.org/10.1016/j.brainres.2012.04.029

Herrmann, Annika & Markus Steinbach. (eds.). 2013. Nonmanuals in sign language. Amsterdam: John Benjamins. DOI:  http://doi.org/10.1075/bct.53

Hosemann, Jana, Annika Herrmann, Markus Steinbach, Ina Bornkessel-Schlesewsky & Matthias Schlesewsky. 2013. Lexical prediction via forward models: N400 evidence from German Sign Language. Neuropsychologia 51(11). 2224–2237. DOI:  http://doi.org/10.1016/j.neuropsychologia.2013.07.013

Jantunen, Tommi. 2013. Signs and transitions: Do they differ phonetically and does it matter? Sign Language Studies 13(2). 211–237. http://muse.jhu.edu/journals/sign_language_studies/v013/13.2.jantunen.html. DOI:  http://doi.org/10.1353/sls.2013.0004

Ju, Min & Paul A. Luce. 2004. Falling on sensitive ears: Constraints on bilingual lexical activation. Psychological Science 15(5). 314–318. DOI:  http://doi.org/10.1111/j.0956-7976.2004.00675.x

Kroll, Judith F. & Erika Stewart. 1994. Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. Journal of Memory and Language 33(2). 149–174. DOI:  http://doi.org/10.1006/jmla.1994.1008

Kubus, Okan, Agnes Villwock, Jill P. Morford & Christian Rathmann. 2014. Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics 36(4). 1–24. DOI:  http://doi.org/10.1017/S0142716413000520

Kutas, Marta & Kara D. Federmeier. 2000. Electrophysiology reveals semantic memory use in language comprehension. Trends in Cognitive Sciences 4(12). 463–470. DOI:  http://doi.org/10.1016/S1364-6613(00)01560-6

Kutas, Marta & Kara D. Federmeier. 2011. Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology 62. 621–647. DOI:  http://doi.org/10.1146/annurev.psych.093008.131123

Kutas, Marta & Steven A. Hillyard. 1980. Reading senseless sentences: Brain potentials reflect semantic incongruity. Science 207(4427). 203–205. DOI:  http://doi.org/10.1126/science.7350657

Lagrou, Evelyne, Robert J. Hartsuiker & Wouter Duyck. 2011. Knowledge of a second language influences auditory word recognition in the native language. Journal of Experimental Psychology: Learning, Memory, and Cognition 37(4). 952–965. DOI:  http://doi.org/10.1037/a0023217

Lee, Brittany, Gabriela Meade, Katherine J. Midgley, Phillip J. Holcomb & Karen Emmorey. 2019. ERP evidence for co-activation of English words during recognition of American Sign Language signs. Brain Sciences 9(6). 148. 1–17. DOI:  http://doi.org/10.3390/brainsci9060148

Leeson, Lorraine & John Saeed. 2012. Word order. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language. An international handbook, 245–265. Berlin: de Gruyter Mouton.

Lillo-Martin, Diane. 2009. Sign language acquisition studies. In Edith L. Bavin & Letitia R. Naigles (eds.), The Cambridge handbook of child language, 399–415. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511576164.022

Lillo-Martin, Diane, Ronice Müller de Quadros, Deborah Chen Pichler & Zoe Fieldsteel. 2014. Language choice in bimodal bilingual development. Frontiers in Psychology 5. 1163. 1–15. DOI:  http://doi.org/10.3389/fpsyg.2014.01163

MacSweeney, Mairéad, Usha Goswami & Helen Neville. 2013. The neurobiology of rhyme judgment by deaf and hearing adults: An ERP study. Journal of Cognitive Neuroscience 25(7). 1037–1048. DOI:  http://doi.org/10.1162/jocn_a_00373

MacSweeney, Mairéad, Dafydd Waters, Michael J. Brammer, Bencie Woll & Usha Goswami. 2008. Phonological processing in deaf signers and the impact of age of first language acquisition. NeuroImage 40(3). 1369–1379. DOI:  http://doi.org/10.1016/j.neuroimage.2007.12.047

Mani, Nivedita & Falk Huettig. 2012. Prediction during language processing is a piece of cake – but only for skilled producers. Journal of Experimental Psychology: Human Perception and Performance 38(4). 843–847. DOI:  http://doi.org/10.1037/a0029284

Marentette, Paula F. & Rachel I. Mayberry. 2000. Principles for an emerging phonological system: A case study of early ASL acquisition. In Charlene Chamberlain, Jill P. Morford & Rachel I. Mayberry (eds.), Language acquisition by eye, 71–90. Mahwah, NJ: Lawrence Erlbaum.

Marian, Viorica & Michael Spivey. 2003a. Bilingual and monolingual processing of competing lexical items. Applied Psycholinguistics 24(2). 173–193. DOI:  http://doi.org/10.1017/S0142716403000092

Marian, Viorica & Michael Spivey. 2003b. Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism: Language and Cognition 6(2). 97–115. DOI:  http://doi.org/10.1017/S1366728903001068

Marian, Viorica, Michael Spivey & Joy Hirsch. 2003. Shared and separate systems in bilingual language processing: Converging evidence from eyetracking and brain imaging. Brain and Language 86(1). 70–82. DOI:  http://doi.org/10.1016/S0093-934X(02)00535-7

Marshall, Jane, Jo Atkinson, Bencie Woll & Alice Thacker. 2005. Aphasia in a bilingual user of British Sign Language and English: Effects of cross-linguistic cues. Cognitive Neuropsychology 22(6). 719–736. DOI:  http://doi.org/10.1080/02643290442000266

McClelland, James L. & Jeffrey L. Elman. 1986. The TRACE model of speech perception. Cognitive Psychology 18(1). 1–86. DOI:  http://doi.org/10.1016/0010-0285(86)90015-0

McGurk, Harry & John MacDonald. 1976. Hearing lips and seeing voices. Nature 264. 746–748. DOI:  http://doi.org/10.1038/264746a0

Meade, Gabriela, Katherine J. Midgley, Zed Sevcikova Sehyr, Phillip J. Holcomb & Karen Emmorey. 2017. Implicit co-activation of American Sign Language in deaf readers: An ERP study. Brain and Language 170. 50–61. DOI:  http://doi.org/10.1016/j.bandl.2017.03.004

Moreno, Eva M., Kara D. Federmeier & Marta Kutas. 2002. Switching languages, switching palabras (words): An electrophysiological study of code switching. Brain and Language 80(2). 188–207. DOI:  http://doi.org/10.1006/brln.2001.2588

Moreno, Eva M. & Marta Kutas. 2005. Processing semantic anomalies in two languages: An electrophysiological exploration in both languages of Spanish-English bilinguals. Cognitive Brain Research 22(2). 205–220. DOI:  http://doi.org/10.1016/j.cogbrainres.2004.08.010

Morford, Jill P., Judith F. Kroll, Pilar Piñar & Erin Wilkinson. 2014. Bilingual word recognition in deaf and hearing signers: Effects of proficiency and language dominance on cross-language activation. Second Language Research 30(2). 251–271. DOI:  http://doi.org/10.1177/0267658313503467

Morford, Jill P., Erin Wilkinson, Agnes Villwock, Pilar Piñar & Judith F. Kroll. 2011. When deaf signers read English: Do written words activate their sign translations? Cognition 118(2). 286–292. DOI:  http://doi.org/10.1016/j.cognition.2010.11.006

Ormel, Ellen, Daan Hermans, Harry Knoors & Ludo Verhoeven. 2012. Cross-language effects in written word recognition: The case of bilingual deaf children. Bilingualism: Language and Cognition 15(2). 288–303. DOI:  http://doi.org/10.1017/S1366728911000319

Perniss, Pamela, Roland Pfau & Markus Steinbach. 2007. Can’t you see the difference? Sources of variation in sign language structure. In Pamela Perniss, Roland Pfau & Markus Steinbach (eds.), Visible variation. Comparative studies on sign language structure, 1–34. Berlin: Mouton de Gruyter. DOI:  http://doi.org/10.1515/9783110198850

Pfau, Roland & Josep Quer. 2010. Nonmanuals: Their prosodic and grammatical roles. In Diane Brentari (ed.), Sign languages, 381–402. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511712203.018

Phillips, Natalie A., Denise Klein, Julie Mercier & Chloé de Boysson. 2006. ERP measures of auditory word repetition and translation priming in bilinguals. Brain Research 1125(1). 116–131. DOI:  http://doi.org/10.1016/j.brainres.2006.10.002

Pizer, Ginger, Keith Walters & Richard P. Meier. 2013. “We communicated that way for a reason”: Language practices and language ideologies among hearing adults whose parents are deaf. Journal of Deaf Studies and Deaf Education 18(1). 75–92. DOI:  http://doi.org/10.1093/deafed/ens031

Plaza-Pust, Carolina & Knut Weinmeister. 2008. Bilingual acquisition of German Sign Language and written German: Developmental asynchronies and language contact. In Ronice Müller de Quadros (ed.), Sign languages: Spinning and unraveling the past, present and future. TISLR9, forty five papers and three posters from the 9th. Theoretical Issues in Sign Language Research Conference, Florianópolis, Brazil, December 2006, 497–529. Petrópolis, RJ, Brazil: Editora Arara Azul.

Poarch, Gregory J. & Janet G. Van Hell. 2012. Executive functions and inhibitory control in multilingual children: Evidence from second-language learners, bilinguals, and trilinguals. Journal of Experimental Child Psychology 113(4). 535–551. DOI:  http://doi.org/10.1016/j.jecp.2012.06.013

Praamstra, Peter, Antje S. Meyer & Willem J.M. Levelt. 1994. Neurophysiological manifestations of phonological processing: Latency variation of a negative ERP component timelocked to phonological mismatch. Journal of Cognitive Neuroscience 6(3). 204–219. DOI:  http://doi.org/10.1162/jocn.1994.6.3.204

Rugg, Michael D. & Tim Curran. 2007. Event-related potentials and recognition memory. Trends in Cognitive Sciences 11(6). 251–257. DOI:  http://doi.org/10.1016/j.tics.2007.04.004

Sandler, Wendy. 2012. Visual prosody. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language. An international handbook, 55–76. Berlin: de Gruyter Mouton.

Schulpen, Béryl, Ton Dijkstra, Herbert J. Schriefers & Mark Hasper. 2003. Recognition of interlingual homophones in bilingual auditory word recognition. Journal of Experimental Psychology: Human Perception and Performance 29(6). 1155–1177. DOI:  http://doi.org/10.1037/0096-1523.29.6.1155

Shook, Anthony & Viorica Marian. 2009. Language processing in bimodal bilinguals. In Earl F. Caldwell (ed.), Bilinguals: Cognition, education and language processing, 35–64. Hauppauge, NY: Nova Science Publishers.

Shook, Anthony & Viorica Marian. 2012. Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition 124(3). 314–324. DOI:  http://doi.org/10.1016/j.cognition.2012.05.014

Shook, Anthony & Viorica Marian. 2013. The bilingual language interaction network for comprehension of speech. Bilingualism: Language and Cognition 16(2). 304–324. DOI:  http://doi.org/10.1017/S1366728912000466

Spivey, Michael J. & Viorica Marian. 1999. Cross talk between native and second languages: Partial activation of an irrelevant lexicon. Psychological Science 10(3). 281–284. DOI:  http://doi.org/10.1111/1467-9280.00151

Steinbach, Markus & Edgar Onea. 2016. A DRT analysis of discourse referents and anaphora resolution in sign language. Journal of Semantics 33(3). 409–448. DOI:  http://doi.org/10.1093/jos/ffv002

Thierry, Guillaume & Yan Jing Wu. 2007. Brain potentials reveal unconscious translation during foreign-language comprehension. Proceedings of the National Academy of Sciences of the United States of America (PNAS) 104(30). 12530–12535. DOI:  http://doi.org/10.1073/pnas.0609927104

Van Assche, Eva, Wouter Duyck & Robert J. Hartsuiker. 2012. Bilingual word recognition in a sentence context. Frontiers in Psychology 3. 174. 1–8. DOI:  http://doi.org/10.3389/fpsyg.2012.00174

Van Hell, Janet G. & Judith F. Kroll. 2013. Using electrophysiological measures to track the mapping of words to concepts in the bilingual brain: A focus on translation. In Jeanette Altarriba & Ludmila Isurin (eds.), Memory, language, and bilingualism: Theoretical and applied approaches, 126–160. New York: Cambridge Scholars Publishing. DOI:  http://doi.org/10.1017/CBO9781139035279.006

Van Hell, Janet G. & Natasha Tokowicz. 2010. Event-related brain potentials and second language learning: Syntactic processing in late L2 learners at different L2 proficiency levels. Second Language Research 26(1). 43–74. DOI:  http://doi.org/10.1177/0267658309337637

Van Petten, Cyma. 1995. Words and sentences: Event-related brain potential measures. Psychophysiology 32(6). 511–525. DOI:  http://doi.org/10.1111/j.1469-8986.1995.tb01228.x

Villameriel, Saúl, Patricia Dias, Brendan Costello & Manuel Carreiras. 2016. Cross-language and cross-modal activation in hearing bimodal bilinguals. Journal of Memory and Language 87. 59–70. DOI:  http://doi.org/10.1016/j.jml.2015.11.005

Von Holzen, Katie & Nivedita Mani. 2012. Language non-selective lexical access in bilingual toddlers. Journal of Experimental Child Psychology 113. 569–586. DOI:  http://doi.org/10.1016/j.jecp.2012.08.001

Von Holzen, Katie & Nivedita Mani. 2014. Bilinguals implicitly name objects in both their languages: An ERP study. Frontiers in Psychology 5. 1415. 1–12. DOI:  http://doi.org/10.3389/fpsyg.2014.01415

Voss, Joel L., Heather D. Lucas & Ken A. Paller. 2010. Conceptual priming and familiarity: Different expressions of memory during recognition testing with distinct neurophysiological correlates. Journal of Cognitive Neuroscience 22(11). 2638–2651. DOI:  http://doi.org/10.1162/jocn.2009.21341

Weber, Andrea & Anne Cutler. 2004. Lexical competition in non-native spoken-word recognition. Journal of Memory and Language 50(1). 1–25. DOI:  http://doi.org/10.1016/S0749-596X(03)00105-0

Wilbur, Ronnie B. 2012. Information structure. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language. An international handbook, 462–489. Berlin: de Gruyter Mouton.

Woll, Bencie. 2001. The sign that dares to speak its name: Echo phonology in British Sign Language (BSL). In Penny Boyes Braem & Rachel Sutton-Spence (eds.), The hands are the head of the mouth: The mouth as articulator in sign languages, 87–98. Hamburg: Signum.

Woll, Bencie & Mairéad MacSweeney. 2016. Let’s not forget the role of deafness in sign/speech bilingualism. Bilingualism: Language and Cognition 19(2). 253–255. DOI:  http://doi.org/10.1017/S1366728915000371

Wu, Yan Jing & Guillaume Thierry. 2011. Event-related brain potential investigation of preparation for speech production in late bilinguals. Frontiers in Psychology 2. 114. 1–9. DOI:  http://doi.org/10.3389/fpsyg.2011.00114