Presentation – Donna Jo Napoli presents “Reactive Effort as a Factor that Shapes Sign Language Lexicons” 2/4 12:30 @SAC1011

This Thursday (February 4th) at 12:30 in SAC 1011, Dr. Donna Jo Napoli will be talking with us about her research, which asks how linguistic patterns emerge from physical facts about the world. The abstract for her talk, titled “Reactive Effort as a Factor that Shapes Sign Language Lexicons,” is below.



Much has been written about the drive toward ease of articulation in spoken languages, but few studies consider sign languages. In fact, the drive is more obvious in sign languages because the articulators are heavier and the source of articulation (the particular joints used in a given token) is uncontroversial. In a study of ASL, the most frequent means of reducing effort was freezing joints (so fewer joints move), and the second most frequent was reducing the mass that is moved. In sum, biomechanics are at play in language variation; however, they play a larger role still. In a study of Italian Sign Language (LIS), Al-Sayyid Bedouin Sign Language, and Sri Lankan Sign Language, an examination of two-handed signs that are reflexively symmetrical across the midsagittal plane reveals that some of them induce torque (instability) while others don’t. It turns out that the stable types are much more frequent across the lexicon than those that induce instability. Thus biomechanics also influences the shape of the lexicon.

Posted in Linguistics, Presentation, Research

Brown Bag – Mark Sicoli presents “Place Reference and Cultural Practice in a Zapotec Community” 1/20 12-1250p @Library Basement, room 111

Mark Sicoli, a linguistic anthropologist from Georgetown University, will be coming to Gallaudet to share some of his recent work on Wednesday, January 20th at noon in the library basement (Room 111). Dr. Sicoli’s research is making important progress toward understanding iconicity in spoken languages, language and embodiment, language and interaction, bilingualism, and the broader socio-political processes that affect and are affected by language. He will be talking about cultural dimensions of place reference in a Zapotec community in Mexico. Please see below for the abstract.

**The presentation will be given in English with ASL interpretation.

Place Reference and Cultural Practice in a Zapotec Community

Dr. Mark A Sicoli
Assistant Professor
Georgetown University


People make reference to places in the variable formulations afforded by their languages and bodies in interaction, and to multiple ends that, in addition to picking out referents, simultaneously build conceptual common ground about seen and unseen landscapes, including moral stances about the social geography of people on those landscapes. This talk examines the different ways that Lachixío Zapotec speakers of Oaxaca, Mexico, formulate and interpret place references in a corpus of video-recorded practical conversations. I describe resources of the Lachixío Zapotec language for referencing place and show how place references are entangled with person references and references to historical events and narratives. I examine place references as collaborative social actions that include both speakers’ place formulations and addressees’ responses that publicly display their uptake and interpretations. Through examining references to locations in turn sequences situated within conversational storytelling events, we gather some evidence for how conceptual common ground is developed through the step-wise progression of turn-taking and how stances about places come to be culturally shared or contested dialogically.

Posted in Brown bag lunch presentations, Uncategorized

CFP: 7th Workshop on the Representation and Processing of Sign Languages: Corpus Mining (May 2016)

(due by Feb 6th, 2016)

Abstracts are invited for a full-day workshop on sign language resources, to take place following the 2016 LREC conference, on May 28th, 2016. Recent technological developments allow sign language researchers to create relatively large video corpora of sign language use that were unimaginable ten years ago. Several national projects are currently underway, and more are planned. This workshop aims to share experiences from current and past efforts: What problems were encountered, and what solutions were created? What linguistic decisions were taken? How have the data been analyzed?

The special focus of this workshop is on Corpus Mining. If one counts Big Data by the storage capacity needed, sign language corpora do qualify as Big Data. It is a different story, however, when one counts by linguistic measures, such as tokens. Even so, many people working on sign language corpora have the feeling that there is much more in their data than they are currently able to extract: there is now far more material than any one person can know intimately. Thus, there is an increasing demand for methods to detect interesting data within sign language corpora. There are at least three dimensions to address:

  • traditional linguistic as well as statistical and machine learning approaches on the basis of hand-made annotation,
  • computer vision operating on the sign language video data, and,
  • in the case of translated material, language processing on the spoken language side to identify areas of interest in the original sign language.

We see the first applications drawing synergies from combining these methods.
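To make the first of these dimensions concrete, here is a minimal, hypothetical sketch of a statistical pass over hand-made annotation: counting gloss tokens on an annotation tier and flagging hapax legomena (glosses that occur only once) as candidates for closer inspection. The gloss labels are invented for illustration; a real corpus would export them from an annotation tool such as ELAN.

```python
from collections import Counter

# Hypothetical gloss tokens from a hand-made annotation tier
# (invented labels; a real corpus would export these from an
# annotation tool such as ELAN).
glosses = ["HOUSE", "GO", "HOUSE", "SEE", "GO", "HOUSE", "PALM-UP"]

freq = Counter(glosses)

# Hapax legomena: glosses occurring exactly once are often worth
# a closer look (rare signs, annotation typos, or genuine finds).
rare = sorted(g for g, n in freq.items() if n == 1)

print(freq.most_common(2))  # [('HOUSE', 3), ('GO', 2)]
print(rare)                 # ['PALM-UP', 'SEE']
```

Even this trivial frequency pass scales with corpus size, which is why statistics over hand-made annotation is one natural entry point for corpus mining.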

The workshop will discuss methodologies, best-practice examples, linguistic data, and applications of corpora within and beyond sign language linguistics. For sign language technologies, five areas will be in focus:


  • Large-scale data visualization
  • Statistical analysis of corpus content
  • Integration of supervised and unsupervised machine learning into corpus environments
  • Sign language recognition (video image processing) leading to (semi-)automatic annotation
  • Synergies between analysis of the manually created annotation, computer vision, and mix-ins from spoken language technologies


It is expected that two of the four sessions will be devoted to the focus topics, while the other two will cover more general sign language corpus issues. We therefore invite abstracts for 20-minute papers or posters (with or without demonstrations) on the following topics:

Corpus Mining

  • Tagging to detect structure
  • Large-scale data visualization
  • Statistical analysis of corpus content
  • Integration of supervised and unsupervised machine learning into corpus environments
  • Sign language recognition (video image processing) leading to (semi-)automatic annotation
  • Synergies between analysis of the manually created annotation, computer vision, and mix-ins from spoken language technologies
  • User interface design to integrate new approaches into corpus linguistics workbenches that sign language researchers work with

General Issues on Sign Language Corpora and Tools

  • Experiences in building sign language corpora
  • Elicitation methodology appropriate for corpus collection
  • Proposals for standards for linguistic annotation or for metadata descriptions
  • Experiences from linguistic research using corpora
  • Use of (parallel) corpora and lexicons in translation studies and machine translation
  • Language documentation and long-term accessibility for sign language data
  • Video compression and streaming for sign language
  • Tool development
  • Linking corpora and lexicons
  • Integrated presentation of corpus and dictionary contents
  • Avatar technology as a tool in sign language corpora and corpus data feeding into advances in avatar technology

Papers (4-8 pages) for both oral/signed presentations and poster presentations will be published as workshop proceedings on the conference website.

Please submit your abstract through the LREC START system (link tbc) no later than February 6th, 2016, indicating whether you prefer an oral/signed or a poster presentation. In the latter case, please also indicate whether you plan to combine the poster with a demo.

When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of the research. Moreover, ELRA encourages all LREC authors to share the described LRs (data, tools, services, etc.) to enable their reuse and the replicability of experiments, including evaluation.

Posted in Uncategorized

Brown Bag – Danica Dicus presents “Towards Corpus-Based Sign Language Interpreting Studies…” 12/9 12-1250p @SLCC open area, Ling Dept

Please join us on Wednesday, December 9th at noon in the open area for a presentation by Danica Dicus titled “Towards Corpus-Based Sign Language Interpreting Studies: A critical look at the relationship between linguistic data and software tools.” This presentation fulfills one of three paper presentation requirements for PhD students in our department. See below for the abstract; we look forward to seeing you all there.

Towards Corpus-Based Sign Language Interpreting Studies: A critical look at the relationship between linguistic data and software tools

In any field of research, the tools used to view and observe the data will influence the outcomes of one’s analysis. The use of software tools in the fields of spoken language interpreting corpora and sign language corpora has allowed for increased shareability and cross-linguistic analysis of data. The same benefits are available to a future sign language interpreting corpus, which would allow linguistic researchers, interpreter researchers, interpreter practitioners, and interpreter educators to have evidence-based discussions about the data. The goal of this study is to develop an understanding of what specific multi-media software tools offer linguistic researchers and how effectively they can bring to light patterns in American Sign Language (ASL)-English interpreted data. To explore the relationship between linguistic data and software tools, I intend to investigate the question: How effectively can four different multi-media software tools allow researchers to represent and describe linguistic aspects of Constructed Dialogue (CD) in ASL-English interpreted texts?

Posted in Uncategorized

Linguistics Department Dissertation Defense – Jeff Palmer “ASL Word Order Development in Bimodal Bilingual Children” 11/19/15 at 1 pm @LLRH6, room 101

Jeff Palmer’s dissertation defense will be Thursday, November 19, 2015 at 1pm in the Living and Learning Residence Hall (LLRH6), Room 101. Everyone is invited to attend the public presentation, which will comprise the first 30-40 minutes. Here is a summary of his dissertation:

ASL Word Order Development in Bimodal Bilingual Children: Early syntax of hearing and cochlear-implanted deaf children from deaf signing families
This study examines the word orders produced by heritage learners of American Sign Language (ASL) from video-recorded naturalistic sessions. These bimodal bilingual children are born to deaf signing parents but have auditory access to English. Commonly, these children are exposed to ASL only in the home and to the dominant language, English, both in school and in the community. This dissertation tracks the production of canonical (SV and VO) and non-canonical (VS and OV) word orders of the subjects from ages 1;8 to 3;6 and compares them to deaf children (without cochlear implants) from deaf signing families. Word order development is assessed by a first-repeated-use measure of acquisition, examining the number of tokens of each of the four word order types produced, as well as the proportion of canonical and non-canonical word orders produced by session over time. Results reveal that the bimodal bilingual children develop canonical word order similarly to the deaf comparison group at 23 months. This suggests that the bimodal bilinguals set their spec-head and head-complement parameters very early. When both ASL-only and code-blended utterances are combined, the overall number of canonical word orders produced by the bimodal bilingual children is not significantly different from that of the deaf controls. However, the children diverge from the deaf controls in terms of their overall use and acquisition of non-canonical word orders. A mixed-effects two-way linear regression confirms an interaction between hearing status and non-canonical word order production. The deaf children produce significantly more OV utterances (β = -6.81; s.e. = 1.35; t = 5.03) and VS utterances (β = 5.32; s.e. = 1.35; t = 3.93) than the bimodal bilinguals. For OV word order, none of the bimodal bilinguals (n = 4) reached the first-repeated-use criterion by 42 months.
For VS word order, the hearing bimodal bilinguals (n = 2) reached criterion more than one year after the deaf children, while the cochlear-implanted deaf children (n = 2) never reached criterion. This suggests that the bimodal bilinguals are still acquiring the ASL morphological features associated with non-canonical word orders. The results of this dissertation offer some of the first quantitative evidence to support the notion that bimodal bilinguals are heritage learners of ASL by specifically identifying which areas of their grammars diverge from deaf controls. These findings support previous research arguing that heritage learners have difficulty with morphology, leading to word order issues. Importantly, these bimodal bilinguals were conservative in their use of non-canonical word order and had very few word order errors. This distinguishes them from reports about late-exposed deaf signers, who have a much higher word order error rate, and further supports their status as heritage learners, whose acquisition paths often diverge from monolingual comparisons.
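The direction of the group difference reported above can be illustrated with a toy calculation. Note this is a simple comparison of group means over invented per-child counts, not the dissertation’s data and not the mixed-effects regression it actually used.

```python
from statistics import mean

# Invented per-child OV (object-verb) utterance counts, for
# illustration only; the dissertation fit a mixed-effects
# regression over longitudinal session data.
ov_counts = {
    "deaf":    [9, 11, 10, 8],  # deaf comparison children
    "bimodal": [2, 3, 1, 2],    # bimodal bilingual children
}

deaf_mean = mean(ov_counts["deaf"])        # 9.5
bimodal_mean = mean(ov_counts["bimodal"])  # 2.0

# A positive difference mirrors the reported direction of the
# effect: the deaf children produce more OV utterances.
diff = deaf_mean - bimodal_mean
print(diff)  # 7.5
```

A mixed-effects model goes beyond this by modeling repeated sessions per child (random effects) alongside the group and word-order predictors, which is what licenses the interaction claim in the summary.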

The department congratulates Jeff for making it this far in his dissertation studies.

Posted in Presentation, Students

Linguistics Department Dissertation Defense – Christina Healy “Construing Affective Events in ASL” 11/11/15 at 12 pm @SLCC open area

We are delighted to announce that Christina Healy’s dissertation defense will be Wednesday, November 11, 2015 at 12pm in the Linguistics Department Open Area (SLCC 3233). Everyone is invited to attend the public presentation, which will comprise the first 30-40 minutes. Here is a summary of her dissertation:

Construing Affective Events in American Sign Language – Ms. Christina Healy

This study examined ASL constructions that denote affective events: those in which someone has an emotional response to a stimulus (e.g., in English, “The bear fascinated the girl”). Previous studies on this topic have focused predominantly on spoken languages, and Generative Linguistic analyses have centered on discussions of psychological verbs (“psych” predicates). In contrast, this dissertation used a Cognitive Linguistic approach and included both psych predicate constructions and those that denote affective events through depiction, namely constructed action and constructed dialogue. A deeper understanding of ASL affective constructions can contribute to formal linguistic study, and the findings may be applied in language curriculum development, interpreter training, and mental health counseling.

The department congratulates Christina for making it this far in her dissertation studies.

Posted in Uncategorized

Brown Bag – showing of “Ishaare” by Annelies Kusters 10/28 and 11/4, 12-1 @SLCC Open Space, Ling Dept

Please join us in the open space of the department the next two Wednesdays, 10/28 and 11/4 at noon, for a special Brown Bag film event. We will be showing a recently released film titled “Ishaare,” directed by anthropologist Annelies Kusters and produced by the Max Planck Institute for the Study of Religious and Ethnic Diversity. The film is based on ethnographic research conducted by Annelies Kusters and Sujit Sahasrabudhe about language and gesture in Mumbai, India, and the ways in which local gestural repertoires are taken up and elaborated upon by Deaf, DeafBlind, and hearing people as they circumvent language barriers and differences in sensory access. The film will be shown in a two-part series: the first half next Wednesday, 10/28, and the second half the following Wednesday. Dr. Kusters has also agreed to video-record a short message about how this film speaks specifically to the concerns of sociolinguistics and sign language linguistics. The film will be followed by discussion. See below for a synopsis.

The Director’s Synopsis

“Ishaare” has a double meaning: it means “gestures” in Hindi and Marathi, but it also means “signs”, indicating that no strict distinction can be made between the two. However, while there is overlap between gestures and sign language, they also differ, as the protagonists of the film show and tell us. The film “Ishaare” documents how six deaf signers communicate with familiar and unfamiliar hearing shopkeepers, street vendors, customers, waiters, ticket conductors and fellow travellers in Mumbai. Reena and Pradip, who is deafblind, go grocery shopping along local streets, in markets and in shops. Sujit, our guide throughout the film, communicates in public transport. Mahesh is a retail businessman who sells stocks of pens to stationery shops. Komal runs an accessory shop with her husband Sanjay, where most customers are schoolgirls. Durga is the manager of a branch of Café Coffee Day, an upmarket coffee chain. When enquiring, selling, bargaining and chitchatting, these deaf and hearing people use gestures and signs, and they also lipread, mouth, read and write in different spoken languages. In the film, they share how they experience these ways of communication.


In case you can’t join us, the film can be seen here. It lasts 80 minutes. (You can switch HD on or off.)

A “making of”, which lasts 20 minutes, is also available:

Posted in Brown bag lunch presentations