Tactile sign languages used by Deafblind signers are most often acquired by signers competent in a visual sign language who can no longer rely on the grammatical system of the visual language as it is, since some of its features are lost due to the loss of vision. A natural question is which repair strategies are adopted to compensate for the loss of the grammatical features of the visual language that can no longer be perceived. We argue that the transformation of LIS (Italian Sign Language) into tactile Italian Sign Language (LISt) is constrained by grammatical principles, rather than reflecting communication strategies that in principle might compensate for the visual loss equally well. Certain innovations are introduced to carry over the grammatical features of LIS to LISt. Even when LISt undergoes processes that make it diverge from LIS, these processes are attested in other natural languages. For example, among the innovations unconsciously introduced by LISt signers we found an instance of cross-modal grammaticalization. Our research suggests that tactile languages have the potential of becoming complete grammatical systems, at least when they build on previous knowledge of a visual sign language.
This paper focuses on tactile Italian Sign Language (LISt), the linguistic system used by the community of Italian Deafblind signers.
Tactile sign languages are usually parasitic on visual sign languages, in the sense that they are often used by individuals who already know a visual sign language before losing sight. This fact suggests a natural perspective to approach the study of tactile sign languages. The general question being asked is: how is the visual language reshaped by the transition to the tactile modality? We know that visual sign languages make grammatical use of non-manual markers (NMMs), such as facial expressions, eye gaze, body posturing and head movement. The information conveyed by these markers is lost in the transition from the visual to the tactile modality, since they cannot be perceived by the addressee (this also leads to their gradual disappearance in the Deafblind signer). Moreover, one salient feature of sign languages is the use of space to convey information: in signed discourse, signers articulate some signs in the neutral space (roughly, the space in front of the torso) and the regions of space in which these signs are articulated are relevant to establish reference (
When the transition to the tactile modality results in loss of information, we might expect tactile signers to make up for this loss by modifying some pre-existing manual items, or by introducing novel manual signs (or by combining some of these options).
These ways of reshaping the visual language are functionally motivated by the need to recover the information lost in the transition. Yet, as we will argue, the fact that tactile signers select a particular way of innovating, among others that are in principle available, is best explained not by the bare need to find effective ways of communication, but by an interaction of grammatical and perceptual constraints. For example, we will see cases where a communicative device that has been invented by interpreters because it seemed very effective is actually never used by Deafblind people, who use an alternative strategy which is not present in LIS, but is attested in other sign and spoken languages.
The main findings that emerge from our study are the following:
Whenever a LIS construction stops being perceivable in LISt but another LIS construction that can convey the same or a similar meaning is available, the latter is systematically employed (conditionals are a clear example).
When a specific meaning is conveyed by devices that are hard to detect haptically, LISt signers may invent a new lexical item to convey that meaning (for example, augmentative meaning is expressed by a sign which is not present in LIS and is the lexicalization of a gesture).
If a closed class item becomes hard to perceive, its form can be modified (this is the case of pointing signs with a pronominal function).
The newly introduced signs (and the modified signs) undergo the phonological processes familiar from LIS (assimilation being an example).
Whenever a LIS construction stops being perceivable in LISt, grammatical innovation may intervene (yes/no questions are a case in point).
Grammatical innovation follows paths well described in the diachronic syntax for spoken and sign languages, for example semantic bleaching.
A recurrent change, motivated by perceptual factors, is the replacement of simultaneity with sequentiality, as the simultaneous presentation of information is sometimes harder to detect in the tactile modality than in the visuospatial modality.
All these changes (creation of new lexical items, grammaticalization, phonological assimilation etc.) are processes familiar from the literature on spoken and sign languages, but occur in LISt because they make signs easier to detect in the tactile modality.
This paper is organized as follows. In Section 2, we provide some information about deafblindness in general and about the community of Deafblind people in Italy to which our informants belong. In Section 3, we explain how we collected the data. In Section 4, we focus on the differences between LIS and LISt. In particular, in Section 4.1, we show how the loss of visual information is compensated for by maximizing some resources that are already present in the visual sign language. In Section 4.2, we show that, in order to make up for the loss of visual information, the LISt lexicon is sometimes enriched by incorporating signs that do not belong to the LIS lexicon. As we will see, these new signs undergo phonological processes, like perseverative assimilation. In Section 4.3, we focus on pointing signs. In Section 4.4, we report an innovation in the way questions are formed, which we argue to be a case of cross-modal grammaticalization, where a lexical LIS sign becomes a purely functional category in LISt. Crucially, both the phonological processes of assimilation and the grammaticalization process are innovations spontaneously (and unconsciously) introduced by Deafblind signers, as we will show. In Section 5, we investigate to what extent LISt can be considered a language independent from LIS and in Section 6 we draw some general conclusions.
In this Section we offer some preliminary information on Usher Syndrome, as this is the most common source of deafblindness, and we also explain how tactile communication works.
Usher Syndrome is a rare genetic disorder resulting in a combination of hearing loss and visual impairment; the vision loss results from retinitis pigmentosa, a degeneration of retinal cells that leads to early night blindness and the gradual loss of peripheral vision. Three subtypes of Usher Syndrome have been identified, but, as our informants suffer from Usher Syndrome type I, we focus on this type. People with Usher Syndrome type I are usually born deaf and lose their vision later in life, typically showing the first visual symptoms in the first decade of life. Moreover, they often have difficulties in maintaining their balance because of problems in the vestibular system.
Deafblind people use different languages and methods to communicate, depending on what they learned or acquired during childhood (
Adapted visual sign language: since the disease involves a progressive reduction of the peripheral visual field that eventually results in tunnel vision, the interlocutor must sign in a reduced signing space, between the upper part of the chest and the lower face and between the two shoulders.
Tracking method: the Deafblind person who still has residual vision holds the wrists of the interlocutor in order to maintain the signs within the visual field and receive information from the interlocutor’s movement. Through this technique, the Deafblind person becomes accustomed to using sign language in a tactile mode. It is therefore considered a transition from the visual to the tactile reception of sign language.
Tactile sign languages, which will be the focus of this paper, are the final stage: Deafblind people, because of the visual impairment, adopt a full tactile mode both in production and in comprehension. As tactile sign languages require physical contact, communication takes place between no more than two signers at a time.
In addition to sign languages, deafblind people may have a different level of competence in the spoken language, which is still accessible in one of the following ways (see
Block alphabet: the interlocutor, with a finger, writes capitalized letters of the alphabet on the palm of the Deafblind person.
Malossi tactile alphabet: different letters of the alphabet are indicated by touching or pinching different points of the hand of the Deafblind person. This system is used only in Italy. In many other countries, Deafblind people use a similar alphabet called “Lorm”.
Fingerspelling: the manual alphabet used in sign languages to spell names or words of the spoken language that have no corresponding sign.
Tadoma, sometimes referred to as “tactile lipreading”: deafblind persons feel the movement of the lips, as well as vibration of the vocal cords, by placing their hands on the mouth, jaw and cheeks of the interlocutor who speaks. As this method is not used by our informants, we do not describe it in further detail.
In this Section we give information about the informants who took part in this research and the elicitation methods we used to obtain the data.
Six Deafblind signers participated in our project. Five of them suffer from Usher Syndrome type I, that is, they are deaf from birth and progressively lost their sight during adolescence. One is deafblind from birth for reasons other than Usher Syndrome.
At the time of data collection (from 2007 to 2010), four of the five participants with Usher Syndrome were totally blind and one had some residual vision (which, however, did not enable him to visually perceive LIS). Four of them were over 50 and one was 39. All five started using LIS before age 6 and were proficient signers before they began losing their sight.
These participants came from different areas of Italy: two were from the North, two from Rome, and one was from the Center (Senigallia). Thus, they were exposed to different varieties of LIS before they became blind. They are autonomous in their everyday life, and regularly meet Deaf friends at local Deaf clubs. Two of them have a job. They are all active members of the Lega del Filo d’Oro, a non-profit organization that offers several programs for Deafblind persons. Being involved in these programs, some of them are members of “Comitato delle Persone Sordocieche”, a consulting body composed only of Deafblind people, whose main task is to contribute to the activities promoted by the Lega del Filo d’Oro. For instance, the decision to start the project on which our research is based was taken by this committee. So, the request to study LISt came from the very participants in this study, who felt that a scientific investigation of their “language” would favor its recognition. During the collection of the data, at the beginning of each task, a professional LISt interpreter would explain to the participants the specific purpose of the activity and, after this, the participants gave their consent to participate and be filmed.
The participant who is deafblind from birth was first exposed to LISt around age 7. Before being exposed to LISt, she had been using a system of conventional domestic signs shared with members of her family from a very early age. At the time our study began, she was 21 and was attending school (later on, she obtained her high school diploma). She is from Perugia (Central Italy) and she is also an active member of the Lega del Filo d’Oro. She regularly interacts with other Deafblind signers.
The data from Deafblind participants with Usher Syndrome were collected on different occasions. First, our informants were videotaped over a whole week (October 15–19, 2007): they gathered at the local branch of the Lega del Filo d’Oro in Lesmo (a small town near Milan), and we held recording sessions both in Lesmo and at the University of Milan-Bicocca. Then, we had sessions with the same informants two years later in Loreto (May 8–9, 2010). The participant who is deafblind from birth was videotaped in Milan over a two-day period (June 14–15, 2010). Finally, there was one more session in Milan in September 2011. We obtained about 35 hours of recordings. Each videotaping session involved a pair of signers. Each exchange was filmed by four cameras. One camera focused on one signer in the pair, another camera focused on the other signer, a third camera focused on both signers simultaneously, and a fourth camera focused on the active hand, which alternates between the two signers. The Figures below illustrate the four camera views.
Focus on single signer.
Focus on single signer.
Focus on both signers.
Focus on active hand.
Only a fraction of the recordings (about 10 hours) has been analyzed up to now, and a smaller portion has been annotated, because the process of annotating the videos is extremely time-consuming: at times, all the videos from the four cameras must be used as resources to reconstruct a sentence. As a consequence, analyzing one minute of an exchange may take up to one hour. Furthermore, this work can be done only by the very few people who are both professional LISt interpreters and trained annotators.
Free conversation in a natural environment is a rich source of linguistic material, but, due to time constraints (the participants were coming from different parts of the country and would gather together only for a few days), it was unlikely that a corpus collected in this way could adequately cover all aspects of the language we wanted to investigate. Thus, while we did make use of free conversations to collect data, we also developed some alternative strategies to elicit a sizable number of tokens for the grammatical constructions we were interested in (typically, constructions that we expected to be more affected by the transition from visual to tactile modality).
Procedures in which a LISt signer is asked to give grammaticality judgments are not suitable for eliciting data. This is due to the fact that Deafblind signers use the tactile variety only with other Deafblind signers: if the addressee is not blind (as is the case with interpreters), Deafblind participants tend to use the standard variety of LIS, relying on the tactile modality only when receiving information. Other standard procedures for assessing linguistic knowledge are also unhelpful: tasks like matching the picture that corresponds to a sentence or eliciting sentences to describe visually presented situations are obviously useless with Deafblind persons.
Depending on the specific aspect we wanted to investigate, we adopted different procedures to elicit data. Some of them adapt standard strategies to the tactile modality. For instance, instead of using pictures to describe situations, we presented situations by using toy props and let our informants explore them with the hands, as shown in Figure
Exploring toy props.
In order to elicit specific constructions, we made our informants play games. We planned sessions to elicit polar (yes/no) and wh-questions, as well as conditionals and adverbs.
To elicit yes/no-questions, we used a modified version of the “twenty questions game” (without imposing a twenty questions limit). In a typical instance of the game, one signer chooses an animal and the other signer must guess the animal by asking questions whose answers can be either “yes” or “no” (e.g., ‘Is it big?’ ‘Can it swim?’, etc.).
In the task for
In order to elicit conditionals, we asked one Deafblind signer to describe to another Deafblind signer how to play a certain game (like chess). The rules of a game are naturally described by distinguishing different hypothetical cases and by stating what is to be done in each case: if this happens, then one can do this, if that happens, then…, etc.
In order to elicit adverbs, we adopted a modified form of the “telephone game”. As in the English version, the point of this game is to preserve the original as much as possible, both in terms of the content that is conveyed and in terms of the form used to convey it. We were interested in failures of preservation (violations of the rules of the game) which could possibly be motivated by the use of LISt. More specifically, the author of this paper who is a sighted and hearing native signer of LIS (namely, a hearing person raised by Deaf signing parents) signed to a Deafblind participant a story we created on purpose that contained many manner adverbs and degree modifiers. In LIS these are most often articulated by altering the movement component of verbs (sometimes, non-manual markings are used); therefore they can be analyzed as morphemes incorporated in the verb, rather than as independent signs. In the story signed to the Deafblind participant, most adverbs were expressed in “the LIS way”, namely by simultaneous means (by altering the movement component of verbs). The Deafblind person who received the story had to sign it to another Deafblind signer, who signed it to a third Deafblind signer, who in turn signed the same story to a fourth one. The last Deafblind signer to receive the story had to repeat it to the first Deafblind person, who had to identify the mistakes that had been made during the passages. The purpose of this task was to check whether the Deafblind signers would continue to use the simultaneous construction to express adverbial modification or whether they would prefer a sequential construction.
To sum up, we collected data both from free and elicited conversations. The elicited conversations were obtained by playing games, some of which involved the use of props to present scenes in a tactile modality.
In this Section, we describe the main changes from LIS to LISt that we have identified based on the analysis of a sample of the collected data, corresponding to about 10 hours of video recording. We start from the less surprising changes, which involve a very productive use of strategies that are attested, although used less often, in the visual language (Section 4.1). In Sections 4.2–4.4, we turn to genuine linguistic innovations, where LISt shows properties unattested in LIS. In all these cases, we argue that the process of transformation from LIS to LISt is grammatically governed.
As we pointed out in Section 1, one natural question which arises in investigating a tactile sign language is how Deafblind signers make up for the loss of information resulting from the fact that non-manual markers (NMMs) can no longer be perceived. Interestingly, only the signer who had residual vision occasionally used NMMs; the others no longer used NMMs at all. One natural expectation is that, whenever a manual sign is available that provides an alternative way to convey the information conveyed by an NMM, it will be used in place of the NMM. This is what we observed.
In LIS, the main device to signal a conditional consists in raising the eyebrows while the antecedent clause is signed manually. Thus, a conditional sentence like (1) is translated in LIS as in (2):
(1) | If it rains, I go out. |
(2)
However, in LIS there is also a manual sign for “if”, which may co-occur with the conditional NMM (IF consists of a G handshape, closed hand with forefinger extended, signed close to the forehead, with the palm initially facing left, when the right hand is used, and the forefinger being moved to the right while rotating the wrist):
(3)
a.
Moreover, in addition to
We were able to elicit many instances of hypothetical discourse in LISt by asking our participants to explain to each other the rules of different games (chess, card games, etc.). All hypotheticals were introduced by one of the manual signs mentioned above (all of them were used). Here is an example:
b. | ||
‘If I take the king, it is all over: you lose, I win.’ |
Arguably, these manual signs do not acquire any new grammatical function, but carry over to the tactile language the same grammatical functions they have in the visual language. Similar facts are reported by Collins (
In LIS manner adverbs and degree modifiers are often articulated by altering the movement component of verbs. For example, in order to translate sentence (4) in LIS, the manual sign
(4) | Gianni eats fast. |
(5) | |
(6) | |
In LISt the preference pattern is reversed: when a separate sign for the manner adverb is available, our Deafblind informants tend to use a sequential construction in order to express manner modification. Thus, in our LISt corpus we normally find occurrences like (7), which in LIS would be more commonly expressed by modification of the verb movement:
(7) | a. | |
‘The sun beats hard.’ | ||
b. | ||
‘The temperature was very hot.’ | ||
c. | ||
‘(He) was very tired.’ | ||
d. | ||
‘It was very hot.’ |
For example, while the adverbial modification in LISt sentence (7a) is expressed by the separate manual sign
Since a difference (e.g. in speed or intensity) between various kinds of movement of the hands can in principle be perceived in the tactile modality, it is not immediately obvious why LISt users should disfavor the strategy of altering the movement component of verbs to express manner adverbs and degree modifiers. However, this fact can be explained by looking at some further facts concerning adverbial modification in the visual language. Consider how the English sentences (8)–(9) can be expressed in LIS:
(8) | Gianni cut the onion. |
(9) | Gianni cut the onion finely. |
As shown in Figure
Adverbial modification by NMM.
So far, the differences we have observed between LIS and LISt reflect the strategy of maximally exploiting the resources that are already present in LIS to avoid loss of information. The changes we observed may be entirely explained in terms of adaptive choices that systematically allow for an effective way of communicating, while no genuine linguistic innovation has been observed yet.
In some cases, adverbial modification involves the introduction of new items in the lexicon of LISt. One common gesture in Italian culture is a horizontal B handshape with the thumb up, the palm facing the signer, moving down and up repeatedly with a rotation of the wrist, as shown in Figure
The emblem
Although this gesture is occasionally found in visual LIS, our LIS informants perceive it as not being part of the lexicon of LIS, but as a gesture borrowed from the spoken culture. In LISt,
(10) | a. | |
‘Very beautiful.’ | ||
b. | ||
‘(He) was very thirsty.’ | ||
c. | ||
‘The sun was very hot.’ |
Crucially, in LISt, we observed a way of expressing augmentative meaning which we do not observe in LIS. For example, in (11), illustrated in Figure
(11) | |
‘A lot of water.’ |
In (11), the handshape of the sign
(12) | |
‘The isle was very beautiful.’ |
Sentence (12) can be analyzed as another case of perseverative assimilation, since the hand configuration in the item glossed as
Only one of the signers who took part in the telephone game did not use the emblem
(13) | |
‘My son was very happy.’ |
One thing that needs to be explained is why
As the lexicalization of
In this Section we have argued that LISt users introduced a new manual sign in the lexicon, corresponding to an augmentative adverb. This allows information that is transmitted simultaneously in LIS to be transmitted sequentially in LISt. This transition is arguably motivated by perceptual constraints, as the processing of simultaneous information is easier in the audiovisual channel than in the haptic one. Importantly, this innovation interacts with phonology.
In the next Section we turn to a difference between LIS and LISt which involves a functional sign.
Another area in which we studied the differences between LIS and LISt is the production of pointing signs.
In LIS, as in other visual sign languages, NPs are associated with locations in space, commonly called ‘(Referential) loci’. Either the NP is directly signed in the locus or, if this is not possible (for example because the noun is signed on the body of the signer), the association between the NP and the locus is done by pointing to, or directing the gaze towards, a specific point in space, which becomes the locus of the NP. If the referent of that NP is present in the utterance context, the pointing is towards its actual location. If the referent is not present, it is assigned a point in the neutral space. Each NP can be assigned a distinct location, and in principle each location can uniquely identify a referent. The point in the neutral space to which the index finger is pointing is relevant for anaphoric purposes, as the pointing sign may be construed with an NP that was previously signed in that point. Lillo-Martin and Klima (
Whether pointing signs should be assimilated to pronouns remains a controversial issue. Clearly, they serve a pronominal function, both anaphorically and deictically. Furthermore, in principle the three-way distinction between first, second, and third person pronouns might be extended to pointing signs, because the index finger in the direction of the signer indicates first person, the index finger in the direction of the addressee indicates second person, and the index finger in the direction of a point different from signer and addressee might be taken to express the grammatical category of “third person”. However, there are non-trivial differences between pointing signs and pronouns in spoken languages (see
The production of pointing signs by LISt signers is an obvious area in which variation from LIS is expected, since finger pointing gestures are not observed in congenitally blind children (or, at any rate, they are extremely rare) and these children use other kinds of deictic gestures like palm pointing (cf.
Whether Deafblind signers avoid finger pointing signs to refer to a third person for similar reasons needs to be investigated. Surely, they cannot produce or perceive eye-gaze, but they might recover information about the locus based on the location and orientation of the hand, so the use of pointing signs might still be informative, although the relevant information might be harder to obtain than in the presence of eye-gaze.
The only existing study on this topic is Quinto-Pozos (
Our study shows a more nuanced picture, as LISt signers do produce pointing signs to refer to non-first/second person referents. However, these signs differ in two respects from the way they are articulated in LIS:
the hand configuration with the index finger is often substituted by a B, Ḃ, a bent B or a 5 configuration, as shown in Figure
B, Ḃ, bent B and 5 configuration.
often, the hand does not point towards the locus, but moves to it.
Reference to the signer (“First person pronoun”).
Reference to the addressee (“Second person pronoun”).
Reference to a person who is not present (“Third person pronoun”).
How are these differences between LIS and LISt related to the transition from the visual language to the tactile language? Two explanations can be entertained here. The first explanation is fully linguistic, while the second builds on what we know about haptic perception.
In a nutshell, the linguistic explanation is that LISt signers move their hand to a locus (instead of pointing to it) in order to meet the grammatical condition that NPs should be able to be overtly assigned an index in a situation in which pointing becomes more difficult. If this hypothesis is right, this tells us something about the lively debate about whether the traditional first, second and third person distinction can be extended to pointing signs. If pointing signs were merely devices to introduce person distinctions, there would be an easy and efficient way to do so: palm pointing gestures, as we saw, are used by congenitally blind children. So, in principle, first, second and third person could be marked by using the direction of palm pointing: palm in the direction of the signer for first person, in the direction of the addressee for second person, and in any other direction for third person. However, Deafblind LISt signers choose to produce pointing signs by moving the hand to different points of the signing space. It seems plausible that they choose to do so because, at an abstract level, this is precisely how the pronominal system of LIS, their visual sign language, works: in the LIS pronominal system, pronouns are contrasted by being associated with different points of the signing space, and the changes LISt signers introduce with respect to pronouns are aimed at preserving this feature of their pronominal system. Another way to put it is that the pronominal system of LIS, and in general of visual sign languages, does more than (or perhaps something different from) introducing a distinction between first, second and third person: it marks the difference among speaker, addressee and other referents, but, at the same time, it provides a way of marking coreference.
A second way to make sense of the modifications in the production of pointing signs by Deafblind signers stems from studies about haptic perception. A preliminary caveat is necessary though. Studies of haptic perception by Deafblind signers are exceedingly rare, and hypotheses emerging from studies of haptic perception in the general population or even in blind individuals can be extended to deafblind signers only tentatively, as the extensive use of a tactile language might influence haptic perception (cf.
One possibility we would like to suggest is that both of the factors we considered as driving the modifications of pointing signs by Deafblind signers are at work, namely that the innovative use of pointing signs by LISt signers is the result of a complex interplay of perceptual factors (a locus is haptically easier to detect if the hand moves there than if it points to it) and grammatical factors (the need to respect the requirement that indices be overtly expressed).
In the next Section we switch to a further case of linguistic innovation: a change from LIS to LISt that may be compared to processes that, in spoken and sign languages, go under the label “grammaticalization”.
In this Section we will show that an interrogative sign (
We start by introducing some background information on question formation in LIS. Polar questions (yes/no-questions) are distinguished from affirmative sentences in LIS (as in many other sign languages) only by an NMM which consists mainly in raised eyebrows:
(14) | |
‘Gianni called.’ |
(15)
As for
(16)
(17)
Structures like (18), in which a
(18) | a. | |
* |
||
b. | |
|
* |
Let’s now turn to LISt. In the data we collected, we found three types of questions:
1
Collins & Petronio (
1
The use of 1
the “canonical use” of
the redundant use of
the use of
the use of
Below are some LISt examples illustrating each use (we follow the convention of using “/” to indicate the occurrence of a pause; the material in parentheses in (20) indicates the discourse preceding the example; finally, we use the superscript “gesture” to indicate that what is being glossed is not a sign of LIS but a gesture):
(19) | Canonical use in |
‘One (thing) is still missing, what is the second one?’ |
(20) | Redundant use in |
( |
|
‘Which animal is nice?’ |
(21) | Alternative question use in |
‘Is it small or big? Small or big…’ |
(22) | Polar question use in |
‘Did your mother sign?’ |
In all these examples,
Concerning the use of
(23) | |
[ |
|
‘Your mother signed, right?’ |
If this were the correct analysis, however, we would expect some prosodic cue to signal that
The use of
(24) | |
‘Do you have a skirt on?’ |
(25) | |
‘Do you have a car?’ |
Thus, our conclusion is that
It is important to realize that use of
(26) a. Declarative in Italian:
Sei felice.
‘You are happy.’
b. Question in Italian:
Sei felice?
‘Are you happy?’
c. You are happy.
d. Are you happy?
In other languages, polar questions are derived from declarative sentences by adding an interrogative particle. An example of this strategy is provided by Tzotzil, a Mayan language spoken in Southeastern Mexico which is discussed by König & Siemund (
In many languages, interrogative particles occur in
(27) Naoya-ga nani-o nomiya-de nonda no?
Naoya-NOM what-ACC bar-LOC drank Q
‘What did Naoya drink at the bar?’
In view of these observations, it is easier to make sense, from a linguistic standpoint, of the overuse of
Two further observations are in order. First, it should be stressed that the use of
The second observation, already anticipated above, is that, when
A natural question is whether the innovations we are talking about are the result of an explicit decision by the community of Deafblind signers or have been unconsciously developed by the members of this community through spontaneous interaction. We can answer this question. The members of this research team reported early findings about the use of
We mentioned at the outset that tactile sign languages are not natural languages in the ordinary sense. They have virtually no native signers and are most often acquired by signers competent in a visual sign language who can no longer rely on the grammatical system of the visual language as it is, since some of its features are no longer perceivable due to the loss of vision.
One issue these observations raise (pointed out to us by an anonymous reviewer) is whether it is appropriate to describe them as distinct languages. For example, nobody would accept that a new (visual?) language is created if certain aspects of articulation of a spoken language are exaggerated to make them more visible to a lip reader. Similarly, nobody would say that English is a distinct language when English words are whispered or shouted from a far distance. Finally, we would not regard Malossi, the communication system which spells different letters of the alphabet by touching or pinching different points of the hand, as a new tactile language (it is a writing system). How is LISt different from these cases? And, even more centrally, can a fully-fledged language exist in the tactile modality?
The observation that signing in LISt is more fatiguing than visual signing might be construed as evidence that LISt is not optimized for the tactile modality and in this sense is not a fully developed tactile language. Indeed, in view of what we know about LISt, it may very well be the case that the transition to a tactile language in the full sense is still in development. Yet there are some clues indicating that a transition to a fully developed tactile language is in progress. First, we could observe that LISt is naturally used in lengthy conversations on a variety of topics: it is used to transmit information, to discuss, to joke, namely for all the purposes for which a fully developed natural language is used. Notice, moreover, that one of the Deafblind participants in the project, namely the participant who has been deafblind from birth, acquired LISt directly as a tactile system (though she was not exposed to it from birth) and it became her primary mode of non-written communication, which she uses for everyday needs and for exchanges with other Deafblind persons. While this is no conclusive evidence that LISt is a tactile language in the full sense, it is an indication that LISt is a natural mode of expression for Deafblind people, whether or not they were previously competent in a visual sign language, and this is a feature that a full tactile language should have. Finally, although LISt may not be fully optimized for the tactile modality, there are several indications that a process in this direction is taking place. For instance, the fact that LISt signers reduce the signing space to minimize fatigue is an indication that a process toward optimization is in progress. Generally speaking, the repair strategies we investigated in the transition from LIS to LISt, including the grammatical innovations introduced by LISt signers, may be seen as part of this process.
This is where the parallel with shouting or whispering, exaggerated mouthing and Malossi breaks down: these uses do not involve the type of grammatical innovations we described for LISt, which include syntactic changes (in interrogatives) and phonological ones (in the innovative production of pointing signs in LISt, with changes at the segmental level which involve at least two formational parameters).
So, our conclusion concerning the status of LISt is that at the moment it is moving toward a full tactile language. Whether this transition will be successful is a separate issue, which also depends on sociolinguistic factors such as the number of people who will use LISt in the future, the extent to which they will form a cohesive community, and so on. What we could observe is that LISt is striving toward that goal and that there are indications that a full transition is possible in principle, namely that a full sign language in the tactile modality can emerge.
Our study focused on the strategies that, in switching to the tactile modality, Deafblind signers adopt to compensate for those grammatical features of the visual sign language that can no longer be perceived.
In principle, one possible choice when using the tactile language is simply to avoid those grammatical constructions of the visual language that rely on markings that can only be perceived visually, and to exploit the manual resources of the visual sign language by using constructions that are equivalent for communicative purposes. As we saw, the strategy of “replacing” constructions that make use of visual NMMs with other constructions that don’t but are functionally equivalent is definitely one strategy that Deafblind signers use (as the production of conditionals, adverbs and questions in LISt shows). However, we also saw that LISt Deafblind signers
However, other examples of innovation seem to respect intrinsic properties of sign languages, which are therefore maintained in the transition to the tactile modality. In the case of questions, the use of
All in all, in some cases the change we observe in LISt makes it more similar to spoken languages than LIS is (sequentiality). In other cases, the particular direction the change takes is constrained by the need to hold on to the grammatical properties of LIS under the changed conditions (
In conclusion, our findings support the view that the language instinct is fundamentally a-modal. Fifty years of research on sign languages have shown that full languages can develop in the visuo-spatial modality. Our study of LISt suggests that language has the potential to fully develop in the tactile modality as well, at least when this development builds on previous knowledge of a visual sign language.
We adopt the convention of using “Deafblind” (with capital “D”) to refer to individuals who are deaf and blind and use tactile sign language as their primary means of communication. We use “deafblind” to refer to individuals who are deaf and blind, irrespective of their mode of communication.
Edwards (
We briefly mention two further elicitation tasks we ran, targeting negative structures and classifier handshapes (see
We adopt the standard conventions of glossing signs with words in capital letters. Non-manual markings (NMMs) are indicated by a line over the sign glosses with some articulatory information. Subscripts 1, 2, 3 indicate first, second, and third person inflections. A hyphen is used to indicate signs which require two or more words in their English glosses. The “+” symbol indicates repetition.
Something similar occurs in LIS also with intensifying adverbs like “strong”, which may be expressed by using a facial expression:
(i)
‘The sun beats hard.’
Collins (
The F handshape looks as follows:
See also Lillo-Martin (
See also Butterworth (
We stick to Quinto-Pozos’s terminology; he defines a third person pronoun as “the use of a point to the left or to the right of the signing space to establish/indicate an arbitrary location in space that is linked to a human referent who is not physically present”.
Based on Cecchetto et al. (
This use of
The sign
The pictures of Deafblind signers are published with their consent, and the other persons depicted in the pictures also gave their consent. The collection of the data was part of an agreement between the University of Milan-Bicocca and the Lega del Filo d’Oro.
We thank the Lega del Filo d’Oro for supporting our research. We also thank our informants Francesco Ardizzino, Maria Costanza Bacianini, Maurizio Casagrande, Pino Gargano, Amerigo Iannola, Alessandro Romano. Finally, we thank Maria Teresa Guasti for helping us to design the elicitation methods. This paper has been possible thanks to the SIGN-HUB project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 693349.
The authors have no competing interests to declare.