1 Introduction

This paper focuses on tactile Italian Sign Language (LISt), the linguistic system used by the community of Italian Deafblind signers.1 LISt, like other tactile sign languages, is not a natural language in the ordinary sense: tactile sign languages have virtually no native signers. Indeed, although there are some individuals who are deafblind from birth and use a tactile sign language as their primary mode of communication, we know of no case of a Deafblind person who has been exposed to a tactile sign language from birth through contact with Deafblind parents (or caretakers) or by living in a cohort of Deafblind peers.

Tactile sign languages are usually parasitic on visual sign languages, in the sense that they are often used by individuals who already knew a visual sign language before losing sight. This fact suggests a natural perspective from which to approach the study of tactile sign languages. The general question being asked is: how is the visual language reshaped by the transition to the tactile modality? We know that visual sign languages make grammatical use of non-manual markers (NMMs), such as facial expressions, eye gaze, body posture and head movement. The information conveyed by these markers is lost in the transition from the visual to the tactile modality, since they cannot be perceived by the addressee (this also leads to their gradual disappearance in Deafblind signers). Moreover, one salient feature of sign languages is the use of space to convey information: in signed discourse, signers articulate some signs in the neutral space (roughly, the space in front of the torso) and the regions of space in which these signs are articulated are relevant for establishing reference (Lillo-Martin & Klima 1990). In the transition toward the tactile modality, the size of the signing space is often reduced. This may depend on the fact that Deafblind signers communicate by keeping in constant physical contact with the interlocutor’s hand moving in the signing space, which makes the exchange more physically fatiguing. Reduction of the signing space might result in information loss or in a reconfiguration of how space can be used to mark linguistic phenomena.

When the transition to the tactile modality results in loss of information, we might expect tactile signers to make up for this loss by modifying some pre-existing manual items, or by introducing novel manual signs (or by combining some of these options).

These ways of reshaping the visual language are functionally motivated by the need to recover the information lost in the transition. Yet, as we will argue, the fact that tactile signers select a particular way of innovating, among others that are in principle available, is best explained not by the bare need to find effective ways of communicating, but by an interaction of grammatical and perceptual constraints. For example, we will see cases where a communicative device invented by interpreters because it seemed very effective is never actually used by Deafblind people, who instead use an alternative strategy that is not present in LIS but is attested in other sign and spoken languages.

The main findings that emerge from our study are the following:

  1. Whenever a LIS construction stops being perceivable in LISt but another LIS construction that can convey the same or a similar meaning is available, the latter is systematically employed (conditionals are a clear example).

  2. When a specific meaning is conveyed by devices that are hard to detect haptically, LISt signers may invent a new lexical item to convey that meaning (for example, augmentative meaning is expressed by a sign which is not present in LIS and is the lexicalization of a gesture).

  3. If a closed class item becomes hard to perceive, its form can be modified (this is the case of pointing signs with a pronominal function).

  4. The newly introduced signs (and the modified signs) undergo the phonological processes familiar from LIS (assimilation being an example).

  5. Whenever a LIS construction stops being perceivable in LISt, grammatical innovation may intervene (yes/no questions are a case in point).

  6. Grammatical innovation follows paths well described in the literature on diachronic syntax for spoken and sign languages, for example semantic bleaching.

  7. A recurrent change, motivated by perceptual factors, is the replacement of simultaneity with sequentiality, as the simultaneous presentation of information is sometimes harder to detect in the tactile modality than in the visuospatial modality.

All these changes (creation of new lexical items, grammaticalization, phonological assimilation etc.) are processes familiar from the literature on spoken and sign languages, but occur in LISt because they make signs easier to detect in the tactile modality.

This paper is organized as follows. In Section 2, we provide some information about deafblindness in general and about the community of Deafblind people in Italy to which our informants belong. In Section 3, we explain how we collected the data. In Section 4, we focus on the differences between LIS and LISt. In particular, in Section 4.1, we show how the loss of visual information is compensated for by maximizing some resources that are already present in the visual sign language. In Section 4.2, we show that, in order to make up for the loss of visual information, the LISt lexicon is sometimes enriched by incorporating signs that do not belong to the LIS lexicon. As we will see, these new signs undergo phonological processes, such as perseverative assimilation. In Section 4.3, we focus on pointing signs. In Section 4.4, we report an innovation in the way questions are formed, which we argue to be a case of cross-modal grammaticalization, where a lexical LIS sign becomes a purely functional category in LISt. Crucially, both the phonological processes of assimilation and the grammaticalization process are innovations spontaneously (and unconsciously) introduced by Deafblind signers, as we will show. In Section 5, we investigate to what extent LISt can be considered a language independent from LIS, and in Section 6 we draw some general conclusions.

2 Deafblind people and their means of communication

In this Section we offer some preliminary information on Usher Syndrome, as this is the most common source of deafblindness, and we also explain how tactile communication works.

Usher Syndrome is a rare genetic disorder resulting in a combination of hearing loss and visual impairment; the vision loss results from retinitis pigmentosa, a degeneration of retinal cells that leads to early night blindness and the gradual loss of peripheral vision. Three subtypes of Usher Syndrome have been identified, but, as our informants suffer from Usher Syndrome of type I, we focus on this type. People with Usher Syndrome of type I are usually born deaf and lose their vision later in life, typically showing the first visual symptoms within the first decade of life. Moreover, they often have difficulties maintaining their balance because of problems in the vestibular system.

Deafblind people use different languages and methods to communicate, depending on what they learned or acquired during childhood (Mesch 2001). Many people with Usher Syndrome type I use a tactile form of sign language because they were exposed to visual sign languages during childhood. Typically, the transition from visual to tactile sign language is gradual and goes through the following stages:

  • Adapted visual sign language: since the disease involves a progressive reduction of the peripheral visual field resulting in tunnel vision, the interlocutor must sign in a reduced signing space, between the upper part of the chest and the lower face and between the two shoulders.

  • Tracking method: the Deafblind person who still has residual vision holds the wrists of the interlocutor in order to keep the signs within the visual field and receive information from the interlocutor’s movement. Through this technique, the Deafblind person gets used to using sign language in a tactile mode. It is therefore considered a transition from the visual to the tactile reception of sign language.

  • Tactile sign languages, which will be the focus of this paper, are the final stage: Deafblind people, because of the visual impairment, adopt a full tactile mode both in production and in comprehension. As tactile sign languages require physical contact, communication takes place between no more than two signers at a time.2 As discussed by Mesch (2001), there are two basic positions which tactile signers can adopt: the monologue position and the dialogue position. In the dialogue position, the two signers sit across from each other and the dominant hand of each signer is under the non-dominant hand of the other signer. The dominant hand articulates the sign while the non-dominant one receives it by detecting the handshape, the orientation and the movement path of the dominant hand of the interlocutor. This allows signers to take turns rapidly. In the monologue position, which is typically used when one person talks to another for an extended time, each signer uses both hands to articulate signs or to receive them.

In addition to sign languages, deafblind people may have varying levels of competence in the spoken language, which remains accessible in one of the following ways (see https://www.sense.org.uk/content/methods-communicating-people-who-are-deafblind for methods of communicating with deafblind people):

  • Block alphabet: the interlocutor, with a finger, writes capitalized letters of the alphabet on the palm of the Deafblind person.

  • Malossi tactile alphabet: different letters of the alphabet are indicated by touching or pinching different points of the hand of the Deafblind person. This system is used only in Italy. In many other countries, Deafblind people use a similar alphabet called “Lorm”.

  • Fingerspelling: the manual alphabet used in sign languages to spell names or words of spoken languages that have no correspondent in the sign language.

  • Tadoma, sometimes referred to as “tactile lipreading”: deafblind persons feel the movement of the lips, as well as vibration of the vocal cords, by placing their hands on the mouth, jaw and cheeks of the interlocutor who speaks. As this method is not used by our informants, we do not describe it in further detail.

3 Methodology of data collection

In this Section we give information about the informants who took part in this research and the elicitation methods we used to obtain the data.

3.1 Participants

Six Deafblind signers participated in our project. Five of them suffer from Usher Syndrome type I, that is, they were deaf from birth and progressively lost their sight during adolescence. One is deafblind from birth for reasons other than Usher Syndrome.

3.1.1 Participants with Usher Syndrome

At the time of data collection (from 2007 to 2010), four of the five participants with Usher Syndrome were totally blind and one had some residual vision (which, however, did not enable him to visually perceive LIS). Four of them were over 50 and one was 39. All five started using LIS before age 6 and were proficient signers before they began losing their sight.

These participants came from different areas of Italy: two were from the North, two from Rome, and one was from the Center (Senigallia). Thus, they were exposed to different varieties of LIS before they became blind. They are autonomous in their everyday life, and regularly meet Deaf friends at local Deaf clubs. Two of them have a job. They are all active members of the Lega del Filo d’Oro, a non-profit organization that offers several programs for Deafblind persons. Being involved in these programs, some of them are members of “Comitato delle Persone Sordocieche”, a consulting body composed only of Deafblind people, whose main task is to contribute to the activities promoted by the Lega del Filo d’Oro. For instance, the decision to start the project on which our research is based was taken by this committee. So, the request to study LISt came from the very participants in this study, who felt that a scientific investigation of their “language” would favor its recognition. During the collection of the data, at the beginning of each task, a professional LISt interpreter would explain to the participants the specific purpose of the activity and, after this, the participants gave their consent to participate and be filmed.

3.1.2 The participant who is deafblind from birth

The participant who is deafblind from birth was first exposed to LISt around age 7. Before being exposed to LISt, she had been using a system of conventional domestic signs shared with members of her family from a very early age. At the time our study began, she was 21 and was attending school (later on, she obtained her high school diploma). She is from Perugia (Central Italy) and is also an active member of the Lega del Filo d’Oro. She regularly interacts with other Deafblind signers.

3.2 Data collection

The data from Deafblind participants with Usher Syndrome were collected on different occasions. First, our informants were videotaped over a whole week (October 15–19, 2007): they gathered at the local branch of the Lega del Filo d’Oro in Lesmo (a small town near Milan), and we held recording sessions both in Lesmo and at the University of Milan-Bicocca. Then, we had sessions with the same informants two years later in Loreto (May 8–9, 2010). The participant who is deafblind from birth was videotaped in Milan over a two-day period (June 14–15, 2010). Finally, there was one more session in Milan in September 2011. We obtained about 35 hours of recordings. Each videotaping session involved a pair of signers, and each exchange was filmed by four cameras: one camera focused on one signer in the pair, another camera focused on the other signer, a third camera focused on both signers simultaneously, and a fourth camera focused on the active hand, which alternated between the two signers. Figures 1, 2, 3 and 4 show the four camera views.

Figure 1: Focus on single signer.

Figure 2: Focus on single signer.

Figure 3: Focus on both signers.

Figure 4: Focus on active hand.

Only a fraction of the recordings (about 10 hours) has been analyzed up to now, and a smaller portion has been annotated, because the process of annotating the videos is extremely time-consuming: at times, the videos from all four cameras must be consulted to reconstruct a single sentence. As a consequence, analyzing one minute of an exchange may take up to one hour. Furthermore, this work can be done only by the very few people who are both professional LISt interpreters and trained annotators.
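As a rough back-of-the-envelope estimate based on the figures just reported (taking the worst case of one hour of annotation per minute of exchange, so this is an upper bound), the analyzed portion alone represents

$$10\ \text{hours} \times 60\ \frac{\text{minutes}}{\text{hour}} \times 1\ \frac{\text{hour of annotation}}{\text{minute}} = 600\ \text{annotator-hours}.$$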

3.3 Elicitation techniques

Free conversation in a natural environment is a rich source of linguistic material, but, due to time constraints (the participants were coming from different parts of the country and would gather together only for a few days), it was unlikely that a corpus collected in this way could adequately cover all aspects of the language we wanted to investigate. Thus, while we did make use of free conversations to collect data, we also developed some alternative strategies to elicit a sizable number of tokens of the grammatical constructions we were interested in (typically, constructions that we expected to be most affected by the transition from the visual to the tactile modality).

Procedures in which a LISt signer is asked to give grammaticality judgments are not suitable for eliciting data. This is due to the fact that Deafblind signers use the tactile variety only with other Deafblind signers: if the addressee is not blind (as is the case with interpreters), Deafblind participants tend to use the standard variety of LIS, relying on the tactile modality only when receiving information. Other standard procedures for assessing linguistic knowledge are also unhelpful: tasks like matching the picture that corresponds to a sentence, or eliciting sentences to describe visually presented situations, are obviously useless with Deafblind persons.

Depending on the specific aspect we wanted to investigate, we adopted different procedures to elicit data. Some of them adapt standard strategies to the tactile modality. For instance, instead of using pictures to present situations, we presented situations by using toy props and let our informants explore them with their hands, as shown in Figure 5.

Figure 5: Exploring toy props.

In order to elicit specific constructions, we made our informants play games. We planned sessions to elicit polar (yes/no) and wh-questions, conditionals, negation (manual or non-manual), adverbs and classifier constructions.

To elicit yes/no-questions, we used a modified version of the “twenty questions” game (without imposing a twenty-question limit). In a typical instance of the game, one signer chooses an animal and the other signer must guess it by asking questions whose answers can only be “yes” or “no” (e.g., ‘Is it big?’, ‘Can it swim?’, etc.).

In the task for wh-questions, two Deafblind signers and an interpreter were involved. The interpreter would present a scene with toy props to one signer and let him/her explore it manually. Then, some change would be introduced in the scene and the interpreter would let the other signer explore the modified scene. Finally, the interpreter would invite the second signer to ask the first about the part of the scene that had been changed. For instance, in one case, one Deafblind signer manually explored a scene made up of three toy props representing three individuals in a row, with the middle one holding a shovel. Then, the shovel was removed and the other signer was allowed to explore the scene. At this point, the interpreter told this signer that one of the props used to hold a shovel and invited him to ask the other signer which prop had held it.

In order to elicit conditionals, we asked one Deafblind signer to describe to another Deafblind signer how to play a certain game (like chess). The rules of a game are naturally described by distinguishing different hypothetical cases and by stating what is to be done in each case: if this happens, then one can do this, if that happens, then…, etc.

In order to elicit adverbs, we adopted a modified form of the “telephone game”. As in the English version, the point of this game is to preserve the original as much as possible, both in terms of the content that is conveyed and in terms of the form used to convey it. We were interested in lack of preservation (violations of the rule of the game) which could plausibly be motivated by the use of LISt. More specifically, the author of this paper who is a sighted and hearing native signer of LIS (namely, a hearing person raised by Deaf signing parents) signed to a Deafblind participant a story we created for this purpose, containing many manner adverbs and degree modifiers. In LIS, these are most often articulated by altering the movement component of verbs (sometimes, non-manual markings are used); therefore, they can be analyzed as morphemes incorporated into the verb, rather than as independent signs. In the story signed to the Deafblind participant, most adverbs were expressed in “the LIS way”, namely by simultaneous means (by altering the movement component of verbs). The Deafblind person who received the story had to sign it to a second Deafblind signer, who signed it to a third, who in turn signed the same story to a fourth. The last Deafblind signer to receive the story had to repeat it to the first Deafblind person, who had to identify the mistakes that had been made along the way. The purpose of this task was to check whether the Deafblind signers would continue to use the simultaneous construction to express adverbial modification or whether they would prefer a sequential construction.

To sum up, we collected data both from free and elicited conversations. The elicited conversations were obtained by playing games, some of which involved the use of props to present scenes in a tactile modality.3

4 From LIS to LISt

In this Section, we describe the main changes from LIS to LISt that we have identified based on the analysis of a sample of the collected data, corresponding to about 10 hours of video recording. We start with the less surprising changes, which involve a very productive use of strategies that are attested, although used less often, in the visual language (Section 4.1). In Sections 4.2–4.4, we turn to genuine linguistic innovations, where LISt shows properties unattested in LIS. In all these cases, we argue that the process of transformation from LIS to LISt is grammatically governed.

4.1 Replacement of visual information by pre-existing items

As we pointed out in Section 1, one natural question which arises in investigating a tactile sign language is how Deafblind signers make up for the loss of information resulting from the fact that non-manual markers (NMMs) can no longer be perceived. Interestingly, only the signer who had residual vision occasionally used NMMs; the others no longer used NMMs at all. One natural expectation is that, whenever a manual sign is available that provides an alternative way to convey the information conveyed by an NMM, it will be used in place of the NMM. This is what we observed.

4.1.1 Conditionals

In LIS, the main device to signal a conditional consists in raising the eyebrows while the antecedent clause is signed manually. Thus, a conditional sentence like (1) is translated in LIS as in (2):4

(1) If it rains, I go out.
(2) LIS
  [example not reproduced: the sentence in (1), with the raised-eyebrows NMM over the antecedent clause]

However, in LIS there is also a manual sign for “if”, which may co-occur with the conditional NMM (IF consists of a G handshape, a closed hand with forefinger extended, signed close to the forehead, with the palm initially facing left when the right hand is used, and the forefinger moving to the right while the wrist rotates):

(3) a. LIS
    [example not reproduced: the manual sign IF co-occurring with the conditional NMM]

Moreover, in addition to IF, LIS has several other manual signs, which may be glossed as EXAMPLE, IN-CASE, OCCASION, and which may be used to convey the type of information conveyed by if-clauses.

We were able to elicit many instances of hypothetical discourse in LISt by asking our participants to explain to each other the rules of different games (chess, card games, etc.). All hypotheticals were introduced by one of the manual signs mentioned above (all of them were used). Here is an example:

  b. LISt
    IF+++ IX-1 TAKE KING ALL DONE CLOSED IX-2 LOSE WIN IX-1 WIN
    ‘If I take the king, it’s all over: you lose, I win.’

Arguably, these manual signs do not acquire any new grammatical function, but carry over to the tactile language the same grammatical functions they have in the visual language. Similar facts are reported by Collins (2004) for tactile ASL. For example, in ASL when-clauses may be indicated by an NMM consisting of an upward tilt of the head and a raising of the eyebrows. The manual sign WHEN may co-occur with this NMM, but is not required in ASL. In tactile ASL, however, the manual sign WHEN is present whenever a when-clause is introduced. In this respect the case of LISt is slightly different, as the standard strategy for conveying if-clauses in LIS uses NMMs only. LISt signers have therefore generalized to all if-clauses a strategy which is marked in LIS, the one with the overt marker IF.

4.1.2 Modifiers

In LIS, manner adverbs and degree modifiers are often articulated by altering the movement component of verbs. For example, in order to translate sentence (4) into LIS, the manual sign EAT is performed repeatedly with a fast movement (we indicate this way of incorporating the adverb into the verb by means of the gloss in (5)). A separate manual sign for the adverb can also be used to express the same meaning (as indicated in (6)), although the option of incorporating the adverb into the verb is preferred:

(4) Gianni eats fast.
(5) LIS
  GIANNI EAT(fast)
(6) LIS
  GIANNI EAT FAST

In LISt the preference pattern is reversed: when a separate sign for the manner adverb is available, our Deafblind informants tend to use a sequential construction in order to express manner modification. Thus, in our LISt corpus we normally find occurrences like (7), which in LIS would be more commonly expressed by modification of the verb movement:

(7) a. LISt
    SUN BEAT-DOWN STRONG
    ‘The sun beats hard.’
  b. LISt
    TEMPERATURE HOT STRONG
    ‘The temperature was very hot.’
  c. LISt
    TIRED STRONG TIRED STRONG
    ‘(He) was very tired.’
  d. LISt
    HOT HEAVY
    ‘It was very hot.’

For example, while the adverbial modification in LISt sentence (7a) is expressed by the separate manual sign STRONG, in LIS the same meaning is usually expressed by altering the movement of BEAT-DOWN in the following way: the movement becomes extremely slow and it is produced with increased muscular tension, indicating greater intensity.

Since a difference (e.g. in speed or intensity) between various kinds of hand movement can in principle be perceived in the tactile modality, it is not immediately obvious why LISt users should disfavor the strategy of altering the movement component of verbs to express manner adverbs and degree modifiers. However, this fact can be explained by looking at some further facts concerning adverbial modification in the visual language. Consider how the English sentences (8)–(9) can be expressed in LIS:

(8) Gianni cut the onion.
(9) Gianni cut the onion finely.

As shown in Figure 6, the presence of the adverbial modifier “finely” is signaled mainly by the facial expression of the signer while the verb is signed.5 Given that in LIS facial expressions commonly co-occur with the verb to express adverbial modification, it is clear that the strategy of expressing adverbial modification by simultaneous rather than sequential means may result in loss of information if adopted generally in LISt. This may account for a general preference for a separate adverb sign in LISt: given that the simultaneous strategy cannot be used consistently, LISt signers tend to consistently adopt the sequential strategy.6

Figure 6: Adverbial modification by NMM.

So far, the differences we have observed between LIS and LISt reflect the strategy of maximally exploiting the resources that are already present in LIS to avoid loss of information. The changes we observed may be entirely explained in terms of adaptive choices that systematically allow for an effective way of communicating, while no genuine linguistic innovation has been observed yet.

4.2 Emblem lexicalization

In some cases, adverbial modification involves the introduction of new items into the lexicon of LISt. One common gesture in Italian culture is a horizontal B handshape with the thumb up, the palm facing the signer, moving down and up repeatedly with a rotation of the wrist, as shown in Figure 7. It means the same as “very” or “much” and it is conventionalized: it is what the literature on gestures calls an emblem (see Kita 2001). For convenience, we gloss it as VERYMUCH, although we should emphasize that in LIS it is a gesture, not a sign:

Figure 7: The emblem VERYMUCH.

Although this gesture is occasionally found in visual LIS, our LIS informants perceive it not as part of the lexicon of LIS, but as a gesture borrowed from the spoken culture. In LISt, VERYMUCH is often used; some occurrences are shown in (10) (as mentioned in footnote 4, we use ‘+’ to indicate repetition):

(10) a. LISt
    BEAUTIFUL VERYMUCH
    ‘Very beautiful.’
  b. LISt
    THIRST DRINK++ VERYMUCH
    ‘(He) was very thirsty.’
  c. LISt
    SUN HEAT VERYMUCH SUN HEAT
    ‘The sun was very hot.’

Crucially, in LISt, we observed a way of expressing augmentative meaning which we do not observe in LIS. For example, in (11), illustrated in Figure 8, the hand configuration in the expression glossed as WATER VERYMUCH is that of the noun WATER (5 handshape) and not that of VERYMUCH (B handshape), while the movement is that of VERYMUCH:

Figure 8: WATER VERYMUCH.

(11) LISt
  WATER VERYMUCH
  ‘A lot of water.’

In (11), the handshape of the sign WATER is retained by the item VERYMUCH, in a way similar to a phonological process of perseverative assimilation in spoken languages, in which a segment retains a feature of a previous segment (like the devoicing of the English plural morpheme /z/ when preceded by a voiceless consonant). After identifying this LISt innovation for the first time in one signer, we looked for similar cases to see whether this strategy is used systematically, and we found that it is. More specifically, we fully analyzed one session of the “telephone game” task. Three of the four signers who participated in this task produced VERYMUCH with perseverative assimilation. They did so five, two, and four times respectively, within the roughly three minutes each spent reproducing the version of the story they had received. Another example, produced by a different Deafblind signer, is:

(12) LISt
  ISLE SEA BEAUTIFUL VERYMUCH
  ‘The isle was very beautiful.’

Sentence (12) can be analyzed as another case of perseverative assimilation, since the hand configuration in the item glossed as BEAUTIFUL VERYMUCH is that of the adjective BEAUTIFUL (F handshape),7 while the movement is that of VERYMUCH.
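The parallel with spoken language phonology can be made explicit in standard rule notation (this is our schematic rendering, not a rule proposed in the LIS or LISt literature). The English devoicing case and the LISt case both copy a feature of the preceding segment onto the affected item:

$$/z/ \rightarrow [-\text{voice}] \;/\; [-\text{voice}]\ \underline{\ \ }$$

$$\text{VERYMUCH:}\quad [\text{handshape B}] \rightarrow [\text{handshape } \alpha] \;/\; [\text{handshape } \alpha]\ \underline{\ \ }$$

In (11), $\alpha$ is the 5 handshape of WATER; in (12), it is the F handshape of BEAUTIFUL. In both cases, the movement component of VERYMUCH is retained.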

Only one of the signers who took part in the telephone game did not use the emblem VERYMUCH: he adopted a different strategy to express augmentation, namely he used the LIS sign MANY, which is normally used with count nouns, with an adverbial function corresponding to the meaning of VERYMUCH:

(13) LISt
  SON HAPPY MANY
  ‘My son was very happy.’

One thing that needs to be explained is why VERYMUCH is used systematically and undergoes phonological assimilation in LISt, but not in LIS. As assimilation is a process available in LIS as well, the difference can be explained if VERYMUCH is part of the lexicon of LISt but not of LIS. The diachronic transition from emblem to sign is a well-known independently attested phenomenon in sign languages (Janzen 2012). This is also true for LIS, where some gestures of the hearing culture have been lexicalized, one example being the sign STEAL. One plausible reason why VERYMUCH has been lexicalized in LISt but not in LIS may be related to the tendency we observed to linearize modifiers in LISt: we argued that in LISt modifiers tend to be signed sequentially rather than simultaneously because adverbial modification by simultaneous means often involves NMMs (although simultaneous manual cues, like speed and intensity, could also convey the augmentative meaning). Given this tendency to linearize modifiers in LISt, it becomes natural for LISt signers to have a separate manual sign to express ‘very’ or ‘much’.

As the lexicalization of VERYMUCH is part of a process by which information presented simultaneously in LIS comes to be presented sequentially in LISt, it parallels changes in the lexicon of tactile ASL described by Edwards (2014). Edwards reports that innovations in the production of asymmetrical two-handed signs emerged in the context of the “pro-tactile” social movement in the Seattle community of Deafblind signers. Members of this movement decided to hold workshops among themselves without the mediation of interpreters and faced the challenge of communicating with multiple Deafblind interlocutors. In this context, a three-person mode of communication was established in which a Deafblind person signs to two interlocutors at the same time. In a three-person configuration, the dominant hand of the signer signs to interlocutor 1 while the non-dominant hand signs to interlocutor 2. Obviously, in the three-person configuration two-handed signs cannot be transmitted unless they are turned into one-handed signs, and this is indeed what happens. For example, an innovative form of a sign can be produced by having the same hand assume first the role of the dominant hand in the original version of the two-handed sign, and then the role of the non-dominant one. Under this innovation, two configurations that were simultaneously produced by two hands come to be sequentially produced by just one hand.

In this Section we have argued that LISt users have introduced a new manual sign into the lexicon, corresponding to an augmentative adverb. This allows information that is transmitted simultaneously in LIS to be transmitted sequentially in LISt. This transition is arguably motivated by perceptual constraints, as processing of simultaneous information is easier in the audiovisual channel than in the haptic one. Importantly, this innovation interacts with phonology.

In the next Section we turn to a difference between LIS and LISt which involves a functional sign.

4.3 Change in pointing signs

Another area in which we studied the differences between LIS and LISt is the production of pointing signs.

In LIS, as in other visual sign languages, NPs are associated with locations in space, commonly called ‘(Referential) loci’. Either the NP is directly signed in the locus or, if this is not possible (for example because the noun is signed on the body of the signer), the association between the NP and the locus is established by pointing to, or directing the gaze towards, a specific point in space, which becomes the locus of the NP. If the referent of that NP is present in the utterance context, the pointing is towards its actual location. If the referent is not present, it is assigned a point in the neutral space. Each NP can be assigned a distinct location, and in principle each location can uniquely identify a referent. The point in the neutral space to which the index finger points is relevant for anaphoric purposes, as a pointing sign may be construed with an NP that was previously signed at that point. Lillo-Martin and Klima (1990) suggested that the loci established by pointing signs are realizations of the indices carried by NPs: signs carrying the same index must point to the same loci, while signs pointing to distinct loci carry distinct referential indices.8 From this point of view, the association of NPs with a position in the signing space is a reflex of a grammatical requirement: every NP must carry an index. What differentiates sign languages from spoken languages is simply that indices are overtly realized in the former.
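Lillo-Martin and Klima’s proposal can be stated compactly as a biconditional (the formulation is ours): for any two nominals $a$ and $b$ in a stretch of signed discourse,

$$\text{locus}(a) = \text{locus}(b) \iff \text{index}(a) = \text{index}(b)$$

On this view, pointing to a locus overtly realizes the referential index that, in spoken languages, pronouns and NPs carry covertly.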

Whether pointing signs should be assimilated to pronouns remains a controversial issue. Clearly, they serve a pronominal function, both anaphorically and deictically. Furthermore, in principle the three-way distinction between first, second, and third person pronouns might be extended to pointing signs: the index finger directed at the signer indicates first person, the index finger directed at the addressee indicates second person, and the index finger directed at a point distinct from signer and addressee might be taken to express the grammatical category of “third person”. However, there are non-trivial differences between pointing signs and pronouns in spoken languages (see Friedman 1975; Meier 1990; Lillo-Martin & Klima 1991; Liddell 1995; Meir 2002; Meier & Lillo-Martin 2010, among others). For one thing, the realizations of non-first-person pronouns are potentially infinite, as they can have the superficial form of signs pointing to any position in the neutral space; therefore, one can argue that the form of these pronouns cannot be specified in the lexicon. Second, the set of locations to which pronouns referring to the addressee point and the set of locations to which pronouns referring to neither the signer nor the addressee point may overlap, so the form of the pronoun alone does not distinguish second person from third person. For this reason, Meier (1990) argued that only a first/non-first-person distinction is expressed by pointing in ASL (this analysis leaves open the possibility that other sign languages have a three-term deictic system).

The production of pointing signs by LISt signers is an obvious area in which variation from LIS is expected, since finger pointing gestures are not observed in congenitally blind children (or, at any rate, are extremely rare); these children use other kinds of deictic gestures, like palm pointing (cf. Fraiberg 1977; Hewes 1981; Iverson et al. 2000). One explanation, suggested by Iverson et al. (2000) for this behavior of blind children, is that referent location by forefinger pointing, but not by palm pointing, is obtained by crossing the imaginary line indicated by the forefinger with the imaginary line indicated by eye gaze (and possibly head orientation).9 As blind children cannot produce or perceive eye gaze, they do not use finger pointing gestures.

Whether Deafblind signers avoid finger pointing signs to refer to a third person for similar reasons needs to be investigated. Surely, they cannot produce or perceive eye gaze, but they might recover information about the locus from the location and orientation of the hand, so the use of pointing signs might still be informative, although the relevant information might be harder to obtain than in the presence of eye gaze.

The only existing study on this topic is Quinto-Pozos (2002), who compared Deaf sighted ASL signers and Deafblind tactile ASL signers on a narrative task designed to elicit pointing signs. The results showed that, unlike the Deaf sighted signers, Deafblind signers never produced third person pronouns, while in some cases they produced first and second person (singular) pronouns.10 Instead of using third person pronouns, they either fingerspelled the name of the referent or used nouns like MOTHER, FATHER, GIRL, or the Signed English sign SHE (not a forefinger pointing sign). Quinto-Pozos’ findings are consistent with the hypothesis that eye gaze is necessary to locate a point in the neutral space. Deafblind ASL signers can still finger point at themselves and at the addressee, as this is presumably easier to do, because of proprioception and because they are in tactile contact with the addressee.

Our study shows a more nuanced picture, as LISt signers do produce pointing signs to refer to non-first/second person referents. However, these signs differ in two respects from the way they are articulated in LIS:

  1. the hand configuration with the extended index finger is often replaced by a B, a Ḃ, a bent B, or a 5 configuration, as shown in Figure 9.

    Figure 9: B, Ḃ, bent B and 5 configuration.

  2. often, the hand does not point to the locus but actually moves towards it. For example, as shown in Figures 10, 11 and 12, to sign a first person pronoun the hand touches the signer, to sign a second person pronoun it touches the addressee, and to sign a third person pronoun it goes to the locus.

    Figure 10: Reference to the signer (“First person pronoun”).

    Figure 11: Reference to the addressee (“Second person pronoun”).

    Figure 12: Reference to a person who is not present (“Third person pronoun”).

How are these differences between LIS and LISt related to the transition from the visual language to the tactile language? Two explanations can be entertained here. The first explanation is fully linguistic, while the second builds on what we know about haptic perception.

In a nutshell, the linguistic explanation is that LISt signers move their hand to a locus (instead of pointing to it) in order to satisfy, in a situation in which pointing has become more difficult, the grammatical requirement that NPs be overtly assigned an index. If this hypothesis is right, it bears on the lively debate over whether the traditional first, second and third person distinction can be extended to pointing signs. If pointing signs were merely devices to introduce person distinctions, there would be an easy and efficient way to do so: palm pointing gestures, as we saw, are used by congenitally blind children. So, in principle, first, second and third person could be marked by using the direction of palm pointing: palm in the direction of the signer for first person, in the direction of the addressee for second person, and in any other direction for third person. However, Deafblind LISt signers choose to produce pointing signs by moving the hand to different points of the signing space. It seems plausible that they do so because, at an abstract level, this is precisely how the pronominal system of LIS, their visual sign language, works: in the LIS pronominal system, pronouns are contrasted by being associated with different points of the signing space, and the changes LISt signers introduce with respect to pronouns are aimed at preserving this feature of their pronominal system. Another way to put it is that the pronominal system of LIS, and of visual sign languages in general, does more than (or perhaps something different from) introducing a distinction between first, second and third person: it marks the difference among speaker, addressee and other referents, but, at the same time, it provides a way of marking coreference.

A second way to make sense of the modifications in the production of pointing signs by Deafblind signers stems from studies of haptic perception. A preliminary caveat is necessary, though. Studies of haptic perception by Deafblind signers are exceedingly rare, and hypotheses emerging from studies of haptic perception in the general population, or even in blind individuals, can be extended to Deafblind signers only tentatively, as the extensive use of a tactile language might itself influence haptic perception (cf. Papagno et al. 2016). Having said that, there are findings in this literature that are potentially interesting for our issue (cf. Kappers & Bergmann Tiest 2016 for an overview). For example, under the experimental conditions described by Hollins & Kelley (1988), blind individuals and blindfolded sighted subjects were first requested to explore the position of objects on a table and were then asked to recall these positions either by pointing to them or by placing the objects again in their original positions. Blind individuals (but not blindfolded sighted subjects) were less accurate when they had to point. This suggests an analogy with our findings, since it seems that (deaf)blind individuals are better at reaching a certain position than at indicating it from a distance. Another potentially relevant finding is that, as observed by Pawluk et al. (2011), decoding haptic information (in the case at hand, distinguishing Figure from Ground) is more efficient if an object moves than if it stays still. So, the hand moving to the locus might offer stronger cues to the Deafblind person who “receives” the pointing sign.

One possibility we would like to suggest is that both of the factors we have considered as driving the modifications of pointing signs by Deafblind signers are at work, namely that the innovative use of pointing signs by LISt signers is the result of a complex interplay of perceptual factors (a locus is haptically easier to detect if the hand moves there than if it points to it) and grammatical factors (the need to respect the requirement that indices be overtly expressed).

In the next Section we switch to a further case of linguistic innovation: a change from LIS to LISt that may be compared to the processes that go under the label of “grammaticalization” in spoken and sign languages.

4.4 Cross-modal grammaticalization

In this Section we will show that an interrogative sign (WHAT) is used in LISt in contexts in which it cannot be used in LIS. We will show this by comparing the new LISt data to a corpus of LIS in which interrogatives have been annotated. We will argue that this innovative use is an example of cross-modal grammaticalization.

We start by introducing some background information on question formation in LIS. Polar questions (yes/no-questions) are distinguished from affirmative sentences in LIS (as in many other sign languages) only by an NMM which consists mainly of raised eyebrows:

(14) LIS
  GIANNI CALL DONE
  ‘Gianni called.’
(15) LIS
  GIANNI CALL DONE (raised eyebrows)
  ‘Did Gianni call?’

As for wh-questions, LIS has a full set of wh-words: the signs WHO, WHAT, WHEN, WHERE, WHY, WHICH and HOW-MANY. The canonical position for these signs is at the right periphery of the sentence, no matter what grammatical function the wh-item plays. Thus, both the sign WHAT in (16), which is the object of the verb BUY, and the sign WHO in (17), which is the subject of the verb SIGN, appear in sentence final position. As indicated in the glosses below, wh-questions are also associated with a specific NMM (roughly, lowered eyebrows), which is obligatorily co-articulated with the wh-phrase. LIS signers tend to restrict the NMM to the wh-phrase (as in (16)–(17) below), although the NMM may also extend to a bigger portion of the clause, as explained by Cecchetto et al. (2009).11

(16) LIS
  GIANNI BUY WHAT (lowered eyebrows on WHAT)
  ‘What did Gianni buy?’
(17) LIS
  CONTRACT SIGN WHO (lowered eyebrows on WHO)
  ‘Who signed the contract?’

Structures like (18), in which a wh-phrase moves to the left periphery, are judged ungrammatical by our informants:

(18) a.    LIS
    *WHAT GIANNI BUY
  b.   LIS
    *WHO CONTRACT SIGN

Let’s now turn to LISt. In the data we collected, we found three types of questions: wh-questions, polar questions, and alternative questions (the counterpart of an alternative question in English would be “Did John invite Mary or Paul?”). Since Deafblind signers cannot perceive facial expressions, and polar questions are distinguished from declarative sentences only by a facial NMM, one expects that, if LIS has an alternative way of signaling polar questions that does not require a facial NMM, Deafblind signers will make use of it. Indeed, questions in LIS may be introduced by signing the inflected form 1ASK2 (‘I ask you’) at the beginning of the sentence, as illustrated in Figure 13, and our Deafblind informants make use of this option in some cases (here, person inflection is realized by moving the sign ASK from the signer to the addressee).

Figure 13
Figure 13

1ASK2.

Collins & Petronio (1998) report that Deafblind signers of tactile ASL use a similar strategy to express polar questions. In ASL, polar questions may be signaled either by an NMM consisting of raised eyebrows, widening of the eyes, forward tilting of the head and body, and possibly raised shoulders, or by a manual sign glossed as QUESTION, which occurs at the end of the sentence and consists of a crooked index finger wiggling (this manual sign adds additional meaning in ASL, since it is emphatic). In tactile ASL, Deafblind signers form polar questions by using the manual sign QUESTION at the end of the sentence. The strategy of tactile ASL signers is thus the same as the one adopted by LISt signers when they use 1ASK2 sentence-initially to mark a polar question.

1ASK2 might be on a par with the NGT sign CALL, which has been hypothesized by Bos (2016) to be a marker of direct speech.

The use of 1ASK2, however, is not the only strategy LISt signers use to indicate interrogative force, and in fact it is not the most interesting one for our purposes in this paper. To show how the other strategy works, let us focus on the use of the wh-sign WHAT in LISt. We found four different uses of WHAT:

  1. the “canonical use” of WHAT,

  2. the redundant use of WHAT,

  3. the use of WHAT in alternative questions,

  4. the use of WHAT in polar questions.

Below are some LISt examples illustrating each use (we follow the convention of using “/” to indicate the occurrence of a pause; the material in parentheses in (20) indicates the discourse preceding the example; finally, we use the superscript “gesture” to indicate that what is being glossed is not a sign of LIS but a gesture):

(19) Canonical use in LISt
  MISS ONE/SECOND WHAT
  ‘One (thing) is still missing, what is the second one?’
(20) Redundant use in LISt
  (WHICH ANIMAL WAITgesture/ANIMAL WAITgesture
  NICE\WHICH NICE ANIMAL\)
  NICE WHICH WHAT
  ‘Which animal is nice?’
(21) Alternative question use in LISt
  LITTLE BIG WHAT. LITTLE BIG
  ‘Is it small or big? Small or big…’
(22) Polar question use in LISt
  MUM SIGN WHAT
  ‘Did your mother sign?’

In all these examples, WHAT is located at the right periphery of the clause, as in LIS. In (19), WHAT fulfills its standard role as an argument of the (phonologically unrealized) copula. In (20), WHAT is redundant: it does not fill the argument structure of the predicate NICE, and the interrogative force of the utterance can already be inferred from the presence of the NP WHICH ANIMAL. Sentence (21) is an alternative question and, again, the function of WHAT is that of signaling interrogative force. The most interesting case for our current purposes is illustrated by sentence (22) and Figure 14, a polar question in which WHAT is used to mark interrogative force.12

Figure 14: MUM SIGN WHAT.

Concerning the use of WHAT in (22), one natural question that arises is whether we are dealing with a sort of tag question. In that case, (22) would be made up of two clauses: the polar question MUM SIGN and an independent elliptical clause to which WHAT belongs. According to this hypothesis, (22) would have the structure in (23):

(23) LISt
  [MUM SIGN] [WHAT]tag
  ‘Your mother signed, right?’

If this were the correct analysis, we should expect some prosodic cue signaling that WHAT belongs to a different clause in (22). However, no intonation break occurs, and we could detect no other prosodic cue that might indicate a clause boundary. We therefore conclude that (22) is a mono-sentential polar question in which WHAT is a marker of interrogative force. Analogous uses of WHAT in LIS are non-existent. We investigated this by checking a LIS corpus collected in ten Italian cities during the years 2009–2010 (the corpus is described in Cardinaletti et al. 2011). The corpus consists of 165 hours of videos, of which 21 are dedicated to the question/answer task. As the latter part of the corpus is transcribed, we could inspect 7 hours of videos from 82 signers distributed over different cities and belonging to the same age group as our LISt informants for this task. In these videos, WHAT occurs in sentence-final position in a number of cases that are not analyzable as canonical WHAT questions. For example, we found occurrences in alternative questions, exclamative uses of WHAT, and instances of WHAT doubling other wh-words like WHERE (see Geraci et al. 2015 for a more complete description). However, and most crucially, in over 7 hours of videos from 82 signers, we found only two occurrences of sentence-final WHAT in constructions that, although of dubious interpretation, could be interpreted as polar questions (both occurrences were by the same signer, from Salerno in Southern Italy). So, we conclude that there is no established use of WHAT as an interrogative marker for polar questions in LIS and that this is a genuine innovation introduced by LISt signers.
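Returning to (22): under the mono-sentential analysis, and using the bracketing format of (23), the structure can be represented as in (23′) (the bracketing and the subscript Q, which marks the particle use of WHAT, are our notation):

(23′) LISt
  [MUM SIGN WHATQ]
  ‘Did your mother sign?’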

The use of WHAT in LISt polar questions is a robust phenomenon: when scanning the videos from the question elicitation task, about a third of the first 87 polar questions we encountered in the LISt corpus (28 questions, i.e. 32.18%) contained WHAT at the right periphery. Interestingly, the LISt signer who is deafblind from birth was never exposed to LIS, and yet she uses WHAT as a marker of interrogative force in polar questions. Here are some examples she produced, shown in Figure 15 (they were clearly polar questions, since she was playing a game in which she was trying to guess an object possessed by the addressee, and she accepted the answer “no” to both questions):13

Figure 15: ASK2 IX2 CAR WHAT.

(24) LISt
  IX-2 SKIRT WHAT
  ‘Do you have a skirt on?’
(25) LISt
  IX-2 CAR WHAT
  ‘Do you have a car?’

Thus, our conclusion is that WHAT may be used as a marker of interrogative force in LISt: sentences (22), (24), (25) (and, perhaps, (21) as well) are plausibly analyzed in this way, since the only function of WHAT in these sentences is that of signaling that they are not declarative sentences.

It is important to realize that the use of WHAT for this purpose, while unattested in LIS, is an instance of a linguistically attested strategy for expressing interrogative force. As is well known, spoken languages have developed several ways to mark questions. In Italian, polar questions are formally distinguished from declarative sentences by intonation, while in English the same distinction is marked by word order (and intonation):

(26) a. Declarative in Italian
    Sei felice.
    ‘You are happy.’
  b. Question in Italian
    Sei felice?
    ‘Are you happy?’
  c. You are happy.
  d. Are you happy?

In other languages, polar questions are derived from declarative sentences by adding an interrogative particle. An example of this strategy is provided by Tzotzil, a Mayan language spoken in Southeastern Mexico, which is discussed by König & Siemund (2007).

In many languages, interrogative particles occur in wh-questions as well (see Dryer 2011 for the position of polar question particles in spoken languages and Zeshan 2011 for an overview of question particles in sign languages). Japanese is a well-known example. In (27) below (from Ishihara 2002), the clause-final particle ‘no’ marks wh-questions:

(27) Japanese
  Naoya-ga nani-o nomiya-de nonda no?
  Naoya-NOM what-ACC bar-LOC drank Q
  ‘What did Naoya drink at the bar?’

In view of these observations, it is easier to make sense, from a linguistic standpoint, of the extended use of WHAT in LISt. The use of WHAT in polar questions in LISt, unattested in LIS, can be naturally described by saying that WHAT in LISt is an interrogative particle. This is supported by the fact that polar question markers homonymous with the word corresponding to ‘what’ are cross-linguistically attested: for example, Bengali and Kannada, as discussed by König & Siemund (2007), are similar to LISt in this respect. Moreover, in LISt the use of WHAT as a particle signaling interrogative force extends to alternative questions and also to wh-questions, where it plays a redundant role.

Two further observations are in order. First, it should be stressed that the use of WHAT as an interrogative particle, while it originates from the need to compensate for the impossibility of perceiving the facial NMM for polar questions, cannot be explained simply in terms of functional considerations. As we saw, LIS has an alternative way to signal interrogative force manually (1ASK2), and Deafblind LISt signers do exploit it. Moreover, from a purely functional point of view, sentence-initial positioning of the particle should be possible, and indeed preferable, since the addressee would have an early warning about sentence type. In fact, as we saw, WHAT, even when used as an interrogative particle, occurs at the right periphery of the sentence, as wh-items do in LIS. This indicates that the use of WHAT as an interrogative particle is grammatically governed.

The second observation, already anticipated above, is that, when WHAT is used as a particle, it loses part of its lexical meaning (the meaning of the contentful word ‘what’) and serves as a grammatical indicator of interrogative force. In this sense, the use of WHAT is a case of grammaticalization. Grammaticalization is a well-attested diachronic process, which is found in sign languages as well (cf. Pfau & Steinbach 2011). What is special about the case of WHAT is that it involves a cross-modal change, in which the shedding of lexical meaning occurs in the passage from a language in the visual modality to a language in the tactile modality. To the best of our knowledge, this is the first time this phenomenon has been reported.

A natural question is whether the innovations we are discussing are the result of an explicit decision by the community of Deafblind signers or have been unconsciously developed by the members of this community through spontaneous interaction. We can answer this question. The members of this research team reported early findings about the use of WHAT in polar questions to an assembly of Deafblind signers who wanted to be informed about the progress of our research (recall that our study began when the “Comitato delle Persone Sordocieche” of the Lega del Filo d’Oro contacted us because they thought that academic research on LISt would benefit the status of their language). This meeting was organized by the Lega del Filo d’Oro on May 8–9, 2010 in Loreto (Central Italy), where about twenty Deafblind signers from different parts of Italy met. The five informants filmed for this study took part in the meeting and were initially surprised to be informed that they had been consistently using WHAT as an interrogative particle in polar questions. So, they were not aware of the innovative grammatical strategies they had been using. However, after the 2010 Loreto meeting, which many LISt interpreters attended, this use of WHAT has entered interpreting practice.

5 A fully-fledged language?

We mentioned at the outset that tactile sign languages are not natural languages in the ordinary sense. They have virtually no native signers, and they are most often acquired by signers competent in a visual sign language who can no longer rely on the grammatical system of the visual language as it is, since some of its features are no longer perceivable due to the loss of vision.

One issue these observations raise (pointed out to us by an anonymous reviewer) is whether it is appropriate to describe tactile sign languages as distinct languages. For example, nobody would accept that a new (visual?) language is created if certain aspects of the articulation of a spoken language are exaggerated to make them more visible to a lip reader. Similarly, nobody would say that English is a distinct language when English words are whispered or shouted from a distance. Finally, we would not regard Malossi, the communication system which spells out the letters of the alphabet by touching or pinching different points of the hand, as a new tactile language (it is a writing system). How is LISt different from these cases? And, even more centrally, can a fully-fledged language exist in the tactile modality?

The observation that signing in LISt is more fatiguing than visual signing might be construed as evidence that LISt is not optimized for the tactile modality and in this sense is not a fully developed tactile language. Indeed, in view of what we know about LISt, it may very well be the case that the transition to a tactile language in the full sense is still underway. Yet there are some clues indicating that a transition to a fully developed tactile language is in progress. First, LISt is naturally used in lengthy conversations on a variety of topics: it is used to transmit information, to discuss, to joke; namely, it is used for all the purposes a fully developed natural language is used for. Notice, moreover, that one of the Deafblind participants in the project, namely the participant who is deafblind from birth, acquired LISt directly as a tactile system (though she was not exposed to it from birth), and this became her primary mode of non-written communication, which she uses for everyday needs and for exchanges with other Deafblind persons. While this is not conclusive evidence that LISt is a tactile language in the full sense, it is an indication that LISt is a natural mode of expression for Deafblind people, whether or not they were previously competent in a visual sign language, and this is a feature that a full tactile language should have. Finally, although LISt may not be fully optimized for the tactile modality, there are several indications that a process in this direction is taking place. For instance, the fact that LISt signers reduce the signing space to minimize fatigue indicates that a process of optimization is underway. Generally speaking, the repair strategies we investigated in the transition from LIS to LISt, including the grammatical innovations introduced by LISt signers, may be seen as part of this process. This is where the parallel with shouting or whispering, exaggerated mouthing, and Malossi breaks down: these uses do not involve the type of grammatical innovations we described for LISt, which include syntactic changes (in interrogatives) and phonological ones (in the innovative production of pointing signs in LISt, with changes at the segmental level involving at least two formational parameters).

So, our conclusion concerning the status of LISt is that it is currently moving toward a full tactile language. Whether this transition will be successful is a separate issue, which also depends on sociolinguistic factors such as the number of people who will use LISt in the future, the extent to which they will form a cohesive community, and so on. What we could observe is that LISt is striving toward that goal and that there are indications that a full transition is possible in principle, namely, that a full sign language in the tactile modality can emerge.

6 Conclusion

Our study focused on the strategies that, in switching to the tactile modality, Deafblind signers adopt to compensate for those grammatical features of the visual sign language that can no longer be perceived.

In principle, one possible choice when using the tactile language is simply to avoid those grammatical constructions of the visual language that rely on markings that can only be perceived visually, and to exploit the manual resources of the visual sign language by using constructions that are equivalent for communicative purposes. As we saw, the strategy of “replacing” constructions that make use of visual NMMs with functionally equivalent constructions that do not is definitely one strategy that Deafblind signers use (as the production of conditionals, adverbs and questions in LISt shows). However, we also saw that Deafblind LISt signers innovate. Most crucially, in order to explain why certain innovations are chosen among those that are in principle possible to compensate for the loss of visual NMMs, one must appeal to an interaction between grammatical constraints and the need to make signs easier to perceive in the tactile modality. For example, the lexicalization of the emblem VERYMUCH is related to a general tendency for linguistic information to be presented sequentially rather than simultaneously in the tactile modality, due to the loss of the visual channel. In this specific respect, LISt becomes more similar to spoken languages, where modification is typically expressed by an independent word.

However, other examples of innovation seem to respect intrinsic properties of sign languages, which are therefore maintained in the transition to the tactile modality. In the case of questions, the use of WHAT in LISt as a particle indicating interrogative force obeys a grammatical rule of LIS, which forces interrogative items to be moved to the right periphery of the clause. Moreover, the use of WHAT as a marker of interrogative force seems to be a special case of grammaticalization, the process by which a lexical item loses part of its lexical meaning to serve a purely grammatical function. As a result, a new item is introduced whose meaning and grammatical function are different from those of the original form (grammaticalization is of course well attested in both spoken and sign languages).

All in all, in some cases the change we observe in LISt makes it more similar to spoken languages than LIS is (sequentiality). In other cases, the particular direction the change takes is constrained by the need to hold on to the grammatical properties of LIS under the changed conditions (WHAT). However, no matter what the direction of change is, even in the most extreme circumstances, grammatical constraints, rather than a generic need to communicate, play a role in the transition to the tactile modality. This is confirmed by another observation: LISt interpreters sometimes signal polar questions by drawing a question mark in the signing space. It is significant that Deafblind LISt signers never replaced the NMM for polar questions with this gesture. We think that Deafblind signers did not adopt this strategy because it involves the use of an artificially created symbol, which does not even exist as a gesture in the Italian community. This graphic symbol cannot be easily incorporated into the sign stream despite its communicative transparency.

In conclusion, our findings support the view that the language instinct is fundamentally amodal. Fifty years of research on sign languages have shown that full languages can develop in the visuo-spatial modality. Our study of LISt suggests that language has the potential to develop fully in the tactile modality as well, at least when this development builds on previous knowledge of a visual sign language.

Abbreviations

ACC = accusative, NOM = nominative, LOC = locative, Q = wh-question particle, NMM = non-manual markings, subscripts 1, 2, 3 = first, second, third person inflections, X-Y = signs which require two or more words in the English glosses, + = sign repetition.

Notes

  1. We adopt the convention of using “Deafblind” (with capital “D”) to refer to individuals who are deaf and blind and use tactile sign language as their primary means of communication. We use “deafblind” to refer to individuals who are deaf and blind. [^]
  2. Edwards (2014: 31) reports that tactile ASL can be used in a three-person configuration inside the Seattle community of Deafblind signers (see Section 4.2). This possibility is not attested among Italian Deafblind people, as far as we know. [^]
  3. We briefly mention two further elicitation tasks we ran, targeting negative structures and classifier handshapes (see Emmorey 2003 for a general presentation of classifiers in sign languages), but these data will not be analyzed in the present study. In the elicitation session for classifiers, each participant explored a scene in which a toy prop had to move around some obstacles. The task was to sign the path of the prop to another Deafblind signer by using LISt, and then to an interpreter by using LIS. In this way, we could track possible differences between the use of classifiers in the visual language and in the tactile language. In the elicitation session for negation, one signer was asked to tell a story about a famous person. The story contained some obvious errors. The task of the addressee was to point out the errors. In this way, one Deafblind signer would elicit denials of some statements from the other Deafblind signer. [^]
  4. We adopt the standard conventions of glossing signs with words in capital letters. The non-manual markings (NMMs) are indicated by a line over the sign glosses with some articulatory information. Subscripts 1, 2, 3 indicate first, second, and third person inflections. Hyphen is used to indicate signs which require two or more words in the English glosses. The “+” symbol indicates repetition. [^]
  5. Something similar also occurs in LIS with intensifying adverbs like “strong”, which may be expressed by using a facial expression:
    (i) LIS
      SUN BEAT-DOWN-STRONG
      ‘The sun beats hard.’
    [^]
  6. Collins (2004) reports that in ASL the manual sign STRONG may be used as an adverb and is co-articulated with a characteristic facial expression (lowered eyebrows), which may also be used alone to convey the same meaning. Collins notes that in tactile ASL, on the other hand, only the manual sign STRONG is used, but “a prolonged hold segment” is inserted, presumably in order to provide a manual equivalent of the NMM of STRONG. The transition from LIS to LISt, as far as the adverbial for “strong” is concerned, seems to be more conservative. In LIS, this adverbial is lexically specified for a specific NMM (frowning). Although, unlike in ASL, this lexical NMM cannot be used alone to convey the same meaning, LIS signers display a rich repertoire of NMMs (squinted eyes, rounded lips, etc.), whose intensity can be modulated to reflect the degree of strength. Variation of degree may also be expressed by modulating the movement of the manual sign (slow movement with strong muscular tension indicating a higher degree) or by repeating it. As one would expect, in LISt NMMs are nearly absent, while modulation of the movement or repetition to express degrees of strength seems to be preserved. For further discussion of other types of adverbial modification in tactile ASL, see Collins & Petronio (1998) and Collins (2004). [^]
  7. The F handshape looks as follows: [image of the F handshape not reproduced here] [^]
  8. See also Lillo-Martin (2002) and Schlenker (2011) for a discussion of donkey anaphora based on the assumption that loci are the overt realizations of referential indices. See Cormier, Schembri & Woll (2013) for an alternative view and for the claim that forefinger pointing in sign languages should not be analyzed as pronouns. [^]
  9. See also Butterworth (2003) for a discussion of alternative ways in which vision may be a condition for pointing. [^]
  10. We stick to Quinto-Pozos’s (2002) terminology; he defines a third person pronoun as “the use of a point to the left or to the right of the signing space to establish/indicate an arbitrary location in space that is linked to a human referent who is not physically present”. [^]
  11. Based on Cecchetto et al. (2009) and many other works on questions in sign languages (cf. Cecchetto 2012 for an overview), we assume that NMMs expressed by lowered eyebrows play a syntactic role directly. However, see Sandler (2011) for an alternative view. [^]
  12. This use of WHAT may be similar to that of the Swedish Sign Language sign HUR reported in Mesch (2001). In Mesch’s data, this sign, besides occurring with the meaning of ‘how’, also occurs in segments that function as polar questions. [^]
  13. The sign WHAT in Figure 15 is a variant of the sign WHAT used in Figure 14. A variation of the same type has been documented for visual LIS in Geraci et al. (2015). For this informant (and only for her) we analyzed data in which she signs to a LISt interpreter, as her visit to Milan did not coincide with the visit of other Deafblind signers. [^]

Ethics and Consent

The pictures of Deafblind signers are published with their consent, and the other persons depicted in the pictures also gave their consent. The collection of the data was part of an agreement between the University of Milan-Bicocca and Lega del Filo d’Oro.

Acknowledgements

We thank Lega del Filo d’Oro for supporting our research. We also thank our informants Francesco Ardizzino, Maria Costanza Bacianini, Maurizio Casagrande, Pino Gargano, Amerigo Iannola, and Alessandro Romano. Finally, we thank Maria Teresa Guasti for helping us design the elicitation methods. This paper was made possible by the SIGN-HUB project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 693349.

Competing Interests

The authors have no competing interests to declare.

References

Bos, Heleen F. 2016. Serial verb constructions in Sign Language of the Netherlands. Sign Language & Linguistics 19(2). 238–251. DOI:  http://doi.org/10.1075/sll.19.2.04bos

Butterworth, George. 2003. Pointing is the royal road to language for babies. In Sotaro Kita (ed.), Pointing: Where language, culture, and cognition meet, 9–33. Mahwah, NJ: Lawrence Erlbaum Associates.

Cardinaletti, Anna, Carlo Cecchetto & Caterina Donati (eds.). 2011. Grammatica, lessico e dimensioni di variazione nella LIS [Grammar, lexicon and types of variation in LIS]. Milano: Franco Angeli.

Cecchetto, Carlo. 2012. Sentence types. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language. An international handbook (HSK – Handbooks of linguistics and communication science), 292–315. Berlin: Mouton De Gruyter.

Cecchetto, Carlo, Carlo Geraci & Alessandro Zucchi. 2009. Another way to mark syntactic dependencies: The case for right-peripheral specifiers in sign languages. Language 85(2). 278–320. DOI:  http://doi.org/10.1353/lan.0.0114

Collins, Steven D. 2004. Adverbial morphemes in Tactile American Sign Language. Cincinnati, OH: Union Institute and University dissertation.

Collins, Steven D. & Karen Petronio. 1998. What happens in tactile ASL? In Ceil Lucas (ed.), Pinky extension and eye gaze: Language use in Deaf Communities, 18–36. Washington, D.C.: Gallaudet University Press.

Cormier, Kearsy, Adam Schembri & Bencie Woll. 2013. Pronouns and pointing in sign languages. Lingua 137. 230–247. DOI:  http://doi.org/10.1016/j.lingua.2013.09.010

Dryer, Matthew S. 2011. Position of polar question particles. In Matthew S. Dryer & Martin Haspelmath (eds.), The world atlas of language structures online. Munich: Max Planck Digital Library. http://wals.info/chapter/92.

Edwards, Terra. 2014. From compensation to integration: Effects of the pro-tactile movement on the sublexical structure of Tactile American Sign Language. Journal of Pragmatics 69. 22–41. DOI:  http://doi.org/10.1016/j.pragma.2014.05.005

Emmorey, Karen (ed.). 2003. Perspectives on classifier constructions in sign languages. Mahwah, NJ: Lawrence Erlbaum & Associates.

Fraiberg, Selma. 1977. Insights from the Blind. New York: Basic Books.

Friedman, Lynn A. 1975. Space, time, and person reference in American Sign Language. Language 51(4). 940–961. DOI:  http://doi.org/10.2307/412702

Geraci, Carlo, Robert Bayley, Anna Cardinaletti, Carlo Cecchetto & Caterina Donati. 2015. Variation in Italian Sign Language (LIS): The case of wh-signs. Linguistics 53(1). 125–151. DOI:  http://doi.org/10.1515/ling-2014-0031

Hewes, Gordon W. 1981. Pointing and language. In Terry Myers, John Laver & John Anderson (eds.), The cognitive representation of speech, 105–130. Amsterdam: North Holland. DOI:  http://doi.org/10.1016/S0166-4115(08)60201-0

Hollins, Mark & Elisabeth K. Kelley. 1988. Spatial updating in blind and sighted people. Perception & Psychophysics 43. 380–388. DOI:  http://doi.org/10.3758/BF03208809

Ishihara, Shinichiro. 2002. Invisible but audible wh-scope marking: Wh-constructions and deaccenting in Japanese. In Line Mikkelsen & Christopher Potts (eds.), Proceedings of the West Coast Conference on Formal Linguistics (WCCFL) 21, 180–193. Somerville, MA: Cascadilla Press.

Iverson, Jana M., Heather L. Tencer, Jill Lany & Susan Goldin-Meadow. 2000. The relation between gesture and speech in congenitally blind and sighted language learners. Journal of Nonverbal Behavior 24. 105–130. DOI:  http://doi.org/10.1023/A:1006605912965

Janzen, Terry. 2012. Lexicalization and grammaticalization. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language. An international handbook (HSK – Handbooks of linguistics and communication science), 816–841. Berlin: Mouton De Gruyter.

Kappers, Astrid M. L. & Wouter M. Bergmann Tiest. 2016. Haptic saliency. In Scholarpedia of Touch. Atlantis Press. DOI:  http://doi.org/10.2991/978-94-6239-133-8_14

Kita, Sotaro. 2001. Gesture in linguistics. International Encyclopedia of the Social & Behavioral Sciences, 6215–6218. Amsterdam: Elsevier Science Publishers.

König, Ekkehard & Peter Siemund. 2007. Speech act distinctions in grammar. In Timothy Shopen (ed.), Language typology and syntactic description, 276–324. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511619427.005

Liddell, Scott K. 1995. Real, surrogate, and token space: Grammatical consequences in ASL. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 19–41. Hillsdale, NJ: Lawrence Erlbaum Associates.

Lillo-Martin, Diane. 2002. Where are all the modality effects? In Richard P. Meier, Kearsy Cormier & David Quinto-Pozos (eds.), Modality and structure in signed and spoken Languages, 241–262. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511486777.013

Lillo-Martin, Diane & Edward S. Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics, 191–210. Chicago, IL: University of Chicago Press.

Meier, Richard P. 1990. Person deixis in American Sign Language. In Susan D. Fischer & Patricia Siple (eds.), Theoretical Issues in Sign Language Research, Vol. 1: Linguistics, 175–190. Chicago, IL: University of Chicago Press.

Meier, Richard P. & Diane Lillo-Martin. 2010. Does spatial make it special? On the grammar of pointing signs in American Sign Language. In Donna B. Gerdts, John C. Moore & Maria Polinsky (eds.), Hypothesis A/Hypothesis B: Linguistic explorations in honor of David M. Perlmutter, 345–360. Cambridge, MA: MIT Press.

Meir, Irit. 2002. A cross-modality perspective on verb agreement. Natural Language & Linguistic Theory 20. 413–450. DOI:  http://doi.org/10.1023/A:1015041113514

Mesch, Johanna. 2001. Tactile Sign Language: Turn taking and questions in signed conversations of Deafblind people. Hamburg: Signum Verlag.

Papagno, Costanza, Carlo Cecchetto, Alberto Pisoni & Nadia Bolognini. 2016. Deaf, blind or deaf-blind: Is touch enhanced? Experimental Brain Research 234. 627–636. DOI:  http://doi.org/10.1007/s00221-015-4488-1

Pawluk, Dianne, Ryo Kitada, Aneta Abramowicz, Cheryl Hamilton & Susan J. Lederman. 2011. Figure/ground segmentation via a haptic glance: Attributing initial finger contacts to objects or their supporting surfaces. IEEE Transactions on Haptics 4(1). 2–13. DOI:  http://doi.org/10.1109/TOH.2010.25

Pfau, Roland & Markus Steinbach. 2011. Grammaticalization in sign languages. In Heiko Narrog & Bernd Heine (eds.), The Oxford handbook of grammaticalization, 683–695. Oxford: Oxford University Press. DOI:  http://doi.org/10.1093/oxfordhb/9780199586783.013.0056

Quinto-Pozos, David. 2002. Deictic points in the visual-gestural and tactile-gestural modalities. In Richard P. Meier, Kearsy Cormier & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 442–467. Cambridge: Cambridge University Press. DOI:  http://doi.org/10.1017/CBO9780511486777.021

Sandler, Wendy. 2011. Prosody and syntax in sign language. Transactions of the Philological Society 108. 298–328. DOI:  http://doi.org/10.1111/j.1467-968X.2010.01242.x

Schlenker, Philippe. 2011. Donkey anaphora: The view from sign language (ASL and LSF). Linguistics and Philosophy 34. 341–395. DOI:  http://doi.org/10.1007/s10988-011-9098-1

Zeshan, Ulrike. 2011. Question particles in sign languages. In Matthew S. Dryer & Martin Haspelmath (eds.), The world atlas of language structures online. Munich: Max Planck Digital Library. http://wals.info/chapter/140.