“Do Sign Languages Have Accents?” Video collaboration between Department of Linguistics at Gallaudet and Mental Floss

The “Do Sign Languages Have Accents?” video was created by our department in collaboration with Mental Floss. Assistant Professor Julie Hochgesang worked closely with Arika Okrent, a regular Mental Floss contributor who also graduated from our program and has written great content like this, this, and this about signed languages. Doctoral student Wink filmed and edited the video in addition to appearing in it. Other members of the department also appear in the video: Nozomi Tomita, Ardavan Guity, Amelia Becker, Heather Hamilton, Casey Analco, Ariel Johnson, Anna Lim Franck, Paul Dudis, and Larissa Lichty.

Screenshot of the Facebook post by Mental Floss showing the video with a woman in mid-sign

The video was originally posted on the Mental Floss Facebook account on September 7, 2017. We are sharing it again here via our YouTube account and blog.

We are hoping this is the start of many more videos in which we collaborate with Mental Floss. We are planning to cover lexical variation, the role of facial expressions (and other nonmanual signals) in signed languages, and depiction. What else would you like to see featured in future videos? Comment on our YouTube video or send us a tweet.

----

Transcript of captions and video descriptions with time stamps:

(there is no audio track for this video)

00:00 The background is a green screen. Julie, a white woman with long brown hair, tattoos on her arms, and a long necklace with a white stone at the bottom, is standing on the right. She is dressed in a black top and gray shirt.

00:01-00:06 Captions in yellow appear on the screen while the woman is signing, “Do sign languages, like American Sign Language (ASL), have accents?”

00:08-00:09 Julie is standing in the middle. “The answer is… of course!”

00:10-00:15 “But what does “accent” mean in a language that doesn’t use voice?”

00:16-00:20 Julie is standing on the left. “For vocal languages, accent is a distinctive way of speaking that SOUNDS different.”

00:21-00:27 “The way it sounds can indicate where a person comes from or what their language community is.”

00:28-00:30 “Sign languages are used in different places and communities too.”

00:31-00:37 “They have distinctive ways of signing that LOOK different. That’s what “accent” is for sign languages.”

00:38-00:41 Julie is standing in the middle. “Sign language accents can have to do with where you’re from.”

00:42-00:49 “For example, New Yorkers have a reputation for signing fast.”

00:49-00:57 The video cuts to different video footage in which Vance, a young white man (in his 20s) with glasses and a beard, dressed in a dark gray top, is standing in a room; stools, tables, and glass window walls are visible behind him. White text appears on the bottom left: “Deaf Comedian” by Vance Youngs (https://youtu.be/rhVHiUrw55w). There is no translation of his signed content (in which he talks about his mother, who is from New York and has influenced his own language), but it is evident that he is signing fast.

00:58-01:01 The video cuts back to the green background with Julie standing in the middle. “Other aspects of your social identity can affect your accent in sign language.”

01:02-01:07 Julie is standing on the left. “For example, your age. Sign can look different depending on whether you’re older or younger.”

01:10-01:11 The video cuts to different video footage (which has been slowed down) in which an older black man with glasses and a beard, dressed in a striped button shirt and a vest, is sitting in a room; display cases and walls full of framed pictures are visible behind him. There is no translation of his signed content, in which he fingerspells “prom”; his P is produced with all fingers other than the index finger extended, along with the thumb. Onscreen text appears: “Austin – Our Community – Convo” by Convo Relay (https://youtu.be/EpT9EvaEg4E).

01:12 The video cuts back to the green background with Julie standing on the left.

01:12-01:18 “What specific features identify this as an older accent? Notice the handshape “P”, usually pronounced like this.”

01:16 The video freezes while Julie demonstrates how “P” is typically produced by an ASL signer, then again at 01:19 with all of the fingers except the index finger extended along with the thumb (like the older man’s production). Julie points at the fingers that are extended along with the thumb.

01:19-01:25 “He has three fingers down here. There may be a tendency for older people to pronounce that handshape like this.”

01:25-01:34 Julie is standing in the middle. “People from other countries who come here and learn ASL produce sign differently too. They have a “foreign accent”.”

01:34-01:40 Another woman appears on screen. Nozomi is Asian, has medium-length black hair, and is wearing a purple shirt. She signs the ASL words for “BODY” and “PHYSICAL”, which also appear on screen as yellow text, while the video freezes on her production of both (she produces them with an upward movement instead of downward).

01:41-01:55 Julie appears again on the left side of the screen. “She’s from Japan. She uses Japanese Sign Language (JSL). She came here and learned ASL. Some of her ASL signs look different. Instead of making signs like BODY and PHYSICAL with the standard downward movement, hers move in the opposite direction.”

01:55-01:57 Another man appears on screen. Ardavan is Middle Eastern with black hair and a beard. He is dressed in an olive green shirt. He signs “USA” (which appears on screen as yellow text) while the video freezes on his production, in which he fingerspells to the left using his right hand.

01:58-02:14 Julie appears again on the right side of the screen. “He’s from Iran. He uses their sign language: Zaban Eshareh Irani (ZEI). He came here, learned ASL with some slight differences. He signed USA moving toward the body instead of the standard away from the body.”

02:15-02:22 Julie appears in the middle of the screen. “We’ve seen some examples of a sign language foreign accent. There’s also something we might call a “hearing accent”.”

02:22-02:33 Amelia appears on screen. She is white with brown hair pulled back and is wearing glasses. She has a black and gray vertically striped shirt on. She signs (talking about how accent is perceivable), but no translation is provided so that the viewer can focus on how she produces sign.

02:33-02:40 Julie, again on the right side of the screen, “How can we characterize a hearing accent? There are two noticeable features in that example. First, the rhythmic quality is different. Second is the arm posture and higher signing space.”

02:41-02:47 Julie is on the left. “The sign language community has so much rich variation. Now that you know what to look for, can you catch the difference in accents?”

In the next two minutes, a montage of other signers is shown along with Julie, Nozomi, and Ardavan. Heather is a white woman with curly brown hair, a blue top, and a moonstone necklace. Casey is a white man with a beard and long black hair pulled back, glasses, and a yellow plaid button shirt. Ariel is a black woman with mid-length curly black hair, silver dangly earrings, and a dark gray long-sleeve shirt. Anna is an Asian woman with long curly hair and red lipstick, wearing a yellow and white striped shirt. Paul is a white man with long brown hair pulled back, a beard with some white hair, and a vertically striped white shirt under a tan blazer. Larissa is a white woman with a brown scarf wrapped around her head, red stud earrings, and a green shirt. Wink is a white man with short yellow hair, a short goatee, a blue button shirt, and a gray blazer.

The montage of signers has people signing the same sentence (with slightly varying messages) in different ways in order to show how accent shows up in different ways. They are all fluent ASL signers.

02:48 Heather “(I’m Hearing)”

02:49 Casey “(I’m Deaf)”

02:50 Ariel “(I’m Hearing)”

02:52 Anna “(I’m Deaf)”

02:54 Paul “(I’m Deaf)”

02:56 Larissa “(I’m Hearing)”

02:57 Nozomi “(I’m Deaf)”

02:58 Wink “(I’m CODA [Child of Deaf Adults])”

03:00 Amelia “(I’m Hearing)”

03:01 Ardavan “(I’m Deaf from a Deaf family too)”

03:03 Julie “(I’m Deaf)”

03:05 Heather “(I was born in Philadelphia, live in DC)”

03:07 Casey  “(I was raised in Indiana)”

03:09 Ariel  “(I’m from Texas)”

03:11 Anna  “(I’m from Manila, Philippines)”

03:13 Paul  “(I’m from Michigan)”

03:15 Larissa  “(I’m from Pennsylvania)”

03:18 Nozomi “(I’m from Japan)”

03:20 Wink  “(I was born in Minnesota)”

03:21 Amelia  “(I’m from St. Louis, MO)”

03:24 Ardavan  “(I’m from Iran)”

03:26 Julie  “(I’m from Chicago and DC)”

03:28 Heather “(I use ASL)”

03:30 Casey  “(I sign ASL)”

03:32 Ariel  “(I sign ASL)”

03:34 Anna  “(I use ASL, Philippine Sign Language and a little Japanese Sign Language)”

03:41 Paul  “(I know ASL and a little of some others. Italian Sign Language, Thai Sign Language … mostly ASL)”

03:57 Larissa  “(I sign ASL)”

04:00 Nozomi “(I know ASL, JSL, Hong Kong Sign Language)”

04:04 Wink  “(I sign ASL)”

04:07 Amelia  “(I sign ASL)”

04:08 Ardavan  “(I use ZEI and ASL)”

04:12 Julie  “(I sign ASL)”

At the end, white text shows up on a black screen.

“Narrated by Dr. Julie Hochgesang, assistant professor, Department of Linguistics, Gallaudet University”

“Produced by Dr. Julie Hochgesang, assistant professor, Department of Linguistics, Gallaudet University; Arika Okrent, Mental Floss; Wink, Department of Linguistics, Gallaudet University”

On another screen, “Thanks to: (in order of appearance) Dr. Julie Hochgesang, Vance Youngs, Nozomi Tomita, Ardavan Guity, Amelia Becker, Heather Hamilton, Casey Analco, Ariel Johnson, Anna Lim Franck, Dr. Paul Dudis, Larissa Lichty, Wink”

Posted in Linguistics, Research

SLAAASh, ASL Signbank, ASL-LEX oh my! Interested in learning more? Join us Friday, Aug 18 from 1 to 6 (Gallaudet, SLCC 3rd floor open area)

The Sign Language Acquisition: Annotation, Archiving & Sharing (SLAAASh) project involves the construction of machine-readable annotated videos of Deaf children acquiring ASL as they interact with their Deaf parents and/or researchers using ASL. The videos were collected in the 1990s and are being re-annotated for consistency and accuracy. The annotations used in the SLAAASh project crucially involve the use of ID glosses, labels used to consistently identify a sign lemma regardless of changes in its use in various contexts. The ID glosses, along with videos and lexical and phonological information about each sign, are housed in a new ASL Signbank, to be open to researchers and built on the basis of Signbanks previously constructed for other sign languages. The ASL Signbank is being constructed to be mutually compatible with the existing publicly available ASL-LEX database, which contains overlapping phonological information as well as frequency and iconicity ratings, with a unique set of visualization options. Technological improvements to ELAN provide a bridge between an annotation file and the Signbank, permitting a close integration of these components for improved consistency in data coding and further research. For example, all instances of a lemma can be called up, modified, or coded through the ELAN/Signbank bridge. The SLAAASh infrastructure can also be used for other projects annotating ASL data. (Follow us on Twitter @ASLSLAASH.)
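
To make the role of ID glosses a little more concrete, here is a minimal sketch (not the SLAAASh or Signbank tooling itself) of the kind of consistency check that ID glosses enable: reading the annotation values on a gloss tier of an ELAN .eaf file, which is plain XML, and flagging any that do not match a list of ID glosses exported from a lexicon. The file names and the tier name “RH-IDgloss” are hypothetical placeholders.

```python
# Minimal sketch (not part of SLAAASh): check gloss annotations in an ELAN .eaf
# file against a list of ID glosses exported from a lexicon such as the ASL Signbank.
# "signbank_id_glosses.txt", "session01.eaf", and the tier name "RH-IDgloss" are
# hypothetical placeholders.
import xml.etree.ElementTree as ET


def load_id_glosses(path):
    """Read one ID gloss per line from a plain-text lexicon export."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}


def glosses_in_tier(eaf_path, tier_id):
    """Yield annotation values from the named tier of an ELAN .eaf (XML) file."""
    root = ET.parse(eaf_path).getroot()
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") == tier_id:
            for value in tier.iter("ANNOTATION_VALUE"):
                if value.text and value.text.strip():
                    yield value.text.strip()


if __name__ == "__main__":
    lexicon = load_id_glosses("signbank_id_glosses.txt")
    unknown = sorted({g for g in glosses_in_tier("session01.eaf", "RH-IDgloss")
                      if g not in lexicon})
    print("Annotations with no matching ID gloss:", unknown)
```
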
On Friday, August 18, there will be three presentations given in the open area of the Linguistics Department at Gallaudet University. These are open to members of the department and others who are interested. (Note: if you are unable to make the entire afternoon, you are welcome to join for whichever presentation interests you.)
1:30-2:30 “Overview of SLAAASh” by Diane Lillo-Martin
2:45-3:45 “ASL Signbank and linking to ELAN” by Julie Hochgesang
4:00-5:00 “ASL-LEX” by Zed Sehyr
5:00-6:00 “Lab,” where more in-depth demonstrations of ASL-LEX or the ASL Signbank can be provided, along with one-on-one assistance with getting linked to the ASL Signbank, etc.
Interpreting will be provided. If close-vision or tactile interpreting is required, please email Julie (julie.hochgesang at gallaudet.edu) and Paul (paul.dudis at gallaudet.edu) as soon as possible.
Posted in Linguistics, Presentation, Research

Dissertation defense – Casey Thornton “The status of palm orientation in the phonological representation of American Sign Language” 3/23/17, 2 pm LLRH6 101

Casey Thornton, a Ph.D. candidate in the Department of Linguistics, will defend her dissertation on “The status of palm orientation in the phonological representation of American Sign Language” on Thursday, March 23, at 2 p.m. in Living and Learning Residence Hall 6 (LLRH6) Room 101. The first forty minutes of the dissertation defense are open to the Gallaudet community.

Ms. Thornton’s dissertation examines the status of palm orientation in the phonological representation of signed languages through three unique but related studies that use the Prosodic Model of sign language phonology as their theoretical foundation. The first study looks at how palm orientation behaves in natural signing, the second takes a psycholinguistic approach, examining how native signers compensate when target joints responsible for orientation are restricted, and the third aims to determine whether native signers are able to correctly identify signs modified to block orientation change. Results from the three combined studies indicated that, in line with the Prosodic Model, there are two types of palm orientation to be represented, and how they function within signed languages is uniquely specified. This work contributes to the ever-growing field of sign language linguistics, bridging the gaps between theoretical models and linguistic experimentation.

The members of Ms. Thornton’s dissertation committee are Dr. Gaurav Mathur, chair of the dissertation committee, Department of Linguistics; Dr. Deborah Chen Pichler, Department of Linguistics; Dr. Julie Hochgesang, Department of Linguistics; Dr. Daniel Koo, Department of Psychology; and Dr. Diane Brentari, Department of Linguistics, University of Chicago.

Ms. Thornton joined the Gallaudet University community in 2010, when she entered the master’s program in linguistics. After completing her M.A. degree in 2012, she entered the doctoral program in linguistics. During her graduate studies, Casey has done extensive research on universal phonotactic constraints in signed languages and has taken a keen interest in bridging gaps between theoretical models of phonology and linguistic experimentation. She has also worked as an adjunct professor at Gallaudet University and as a graduate assistant in the Brain and Language Laboratory for Neuroimaging (BL2). In 2015, Casey returned to her hometown and has been teaching at California State University, Northridge as an adjunct professor in Deaf Studies.

Posted in Linguistics, Presentation, Research, Students

News: register now for FEAST 2017 June 21-22 in Reykjavík

Via SLLS

FEAST 2017 Local organizing committee

————————————————————————

Dear all,

Registration is now open for FEAST (Formal and Experimental Advances in Sign Language Theory) in Reykjavík, June 21-22. Please consult the conference website for further information:

https://sites.google.com/site/feastconference/home/conferences/feast_reykjavik_2017

We will post a preliminary programme on our website very soon.

If you plan to attend the conference, we strongly advise you to book accommodation as soon as possible because Reykjavík has become a very popular tourist destination in recent years. For further information on booking accommodation, see the conference homepage. Please note that Sunna Guesthouse and Hótel Reykjavík Natura have reserved rooms for conference guests until March 15th:
https://sites.google.com/site/feastconference/home/conferences/feast_reykjavik_2017/accomodation

Local organizing committee:
Jóhannes Gísli Jónsson
Kristín Lena Þorvaldsdóttir
Rannveig Sverrisdóttir
Þórhalla Guðmundsdóttir Beck
Scientific committee:
Chiara Branchini
Diane Brentari
Anna Cardinaletti
Carlo Cecchetto
Caterina Donati
Karen Emmorey
Carlo Geraci
Meltem Kelepir
Gaurav Mathur
Roland Pfau
Christian Rathmann
Josep Quer
Markus Steinbach
Ronnie Wilbur
Bencie Woll

Posted in Conferences, Linguistics, Research

CFP: The 6th Meeting of Signed and Spoken Language Linguistics (SSLL2017) Dates: 22-24 September 2017 in Osaka, Japan

Conference Announcement via Keiko Sagara on SLLS list

Conference: The 6th Meeting of Signed and Spoken Language Linguistics
(SSLL2017)
Dates: 22-24 September 2017
Location: National Museum of Ethnology, Osaka, Japan
Organizers: HARA Daisuke (H), IIZUMI Naoko (H), IKEDA Masumi (D), KIKUSAWA Ritsuko (H, Chair), MATSUOKA Kazumi (H), SAGARA Keiko (D)
Website: http://www.r.minpaku.ac.jp/ritsuko/ssll2017/index.html

Contact: SSLL2017@minpaku.ac.jp
Abstract due: 31 March 2017

Description:
SSLL2017 will be held for the promotion of sign language linguistics, and also for a better understanding of human language by comparing and analyzing signed and spoken languages. English/Japanese, ASL/English, and JSL/Japanese interpretation will be provided. We invite presentations (a 30-minute presentation followed by 10 minutes of questions and answers) on any topic related to sign language linguistics and/or a comparison between signed and spoken languages.

Invited Presenters:
LYNN YONG-SHI HOU (University of California, San Diego)

Past Events:
The 5th Meeting of Signed and Spoken Language Linguistics (SSLL2016)
http://www.r.minpaku.ac.jp/ritsuko/ssll2016/index.html

Posted in Conferences, Linguistics, Research

SHARE: Native or early signer (including CODAs)? 18 and older. Please take this online survey about ASL

Calling all native and early signers (those who started signing before the age of 6), including CODAs, who are 18 and older: you are invited to participate in a research study on American Sign Language as used by Deaf and signing communities in the United States. The survey is online, so you can take it in your own home, and it should take no longer than 30 minutes. We are offering to reimburse $5 to participants who complete the survey.

To take the survey, click on the following link or copy and paste it into your browser: http://survey.az1.qualtrics.com/SE/?SID=SV_6x6PEyZ6ajnoRbT

UPDATE (3/2/17) – The survey can only be taken on a desktop computer. We are working on making it possible to take the survey on a mobile device.

If you have any questions, you can contact Heather Hamilton via email at heather.hamilton@gallaudet.edu. You may also contact Dr. Julie Hochgesang via email at julie.hochgesang@gallaudet.edu.

This has been reviewed by the IRB committee at Gallaudet University.

Posted in Research, Students

SHARE: Media piece on Pro-Tactile ASL in Quartz, “A language for the DeafBlind”

 

Quartz wrote about Pro-Tactile ASL (link below), featuring faculty member Terra Edwards; her colleague Oscar Serna, a DeafBlind user of PT; and PEN faculty member Clifton Langdon.

DeafBlind Americans developed a language that doesn’t involve sight or sound

“Pro-tactile ASL borrows bits and pieces from ASL, adapting them to be useful for people who can’t see. Rather than using their own hands as a reference for communication, people who convey information with pro-tactile ASL use the perceiver’s hands and body. The speaker will touch the perceiver’s body and move his or her hands; in doing so, the speaker takes advantage of the perceiver’s proprioception, or sense of where his or her limbs are. “When we’re talking about a particular shape, instead of showing the shape in space, you’d show [by moving] the perceiver’s arm,” said Serna.”

To read more and see the video, visit the Quartz site using the URL below.

In case it is not available in the original post, here is a transcript of captions and video descriptions with time stamps for Quartz’s “A Language for the DeafBlind” (compiled by Clifton Langdon):
0:00-0:05 Clifton Langdon & Oscar Serna facing each other. Oscar signs using PTASL. Text with an arrow above Oscar appears: “Oscar Serna.”
CC: Oscar Serna is speaking in a brand new language
0:05-0:11 Text appears: “I’m really stressed out” with “stressed out” in bold.
Oscar: “I’m really stressed out!”
0:12-0:16 Oscar standing, directly facing the camera.
CC: Oscar is both deaf and blind, or, “DeafBlind.”
0:17-0:19 Oscar and Clifton walk outside.
CC: He works at Gallaudet University, on a project tracking the evolution of a language for those who can’t see or hear.
0:20-0:22 Text on screen shows animation of “Pro-Tactile ASL”
CC: This new language is called pro-tactile ASL. The ASL stands for American Sign Language, which uses visual signs for words and phrases.
0:23-0:29 Clifton appears on screen and signs a visual ASL version of what Oscar said about being stressed.
CC: The ASL stands for American Sign Language, which uses visual signs for words and phrases.
0:30-0:35 Oscar uses PTASL to talk about a car accident
CC: Pro-tactile ASL communicates entirely through touch.
0:36-0:43 Clifton uses visual ASL. Text appears: “I cut down a tree.”
CC: For example, here’s a sentence in ASL: Clifton: “I cut down a tree.”
0:43-0:50 Oscar uses PTASL. Text appears: “I cut down a tree.”
CC: Here’s how Oscar would say the same sentence in pro-tactile ASL: Oscar: “I cut down a tree.”
0:50-0:59 Three circles appear showing ASL, Fingerspelling and Braille with “ASL” “Fingerspelling” “Braille” written above each.
CC: Historically, DeafBlind people communicated through American Sign Language, Braille, and fingerspelling, where each letter of each word is signed into a person’s hand.
1:00-1:02 A photo of Helen Keller is shown, with a circle drawn around her hand on another woman’s hand, emphasizing how she communicated.
CC: Helen Keller, maybe the world’s most famous DeafBlind person, used fingerspelling.
1:02-1:14 Close-up shot of Oscar, Clifton, and his Ph.D. student, Lauren Berger, using PTASL together.
CC: But those are limiting, especially when DeafBlind people want to talk to each other.
CC: Pro-tactile ASL emerged in the early 2000s, as once-isolated DeafBlind people began to form communities.
1:14-1:19 Clifton, Oscar, and a CDI (Certified Deaf Interpreter) on screen. The CDI is interpreting to Oscar from a person off screen.
CC: DeafBlind people have been adapting American Sign Language and adding gestures for things many languages don’t have words for.
1:19-1:23 slow-motion replay with circle drawn around Clifton’s hand on Oscar’s arm to emphasize that Clifton is tapping on Oscar’s hand.
CC: For example, this tap on the hand is like nodding.
1:23-1:28 Oscar and Clifton walk down the hall and chat.
CC: The language has been evolving ever since.
1:28-1:42 Clifton sits and signs using visual ASL. A title appears: “Clifton Langdon. Professor, Gallaudet University”
CC: Clifton: “Now what’s new in pro-tactile is that we’re seeing things that were used in visual sign language transition from the use of space to the use of the perceiver’s body.”
1:43-1:48 Two circles appear. The first contains cartoon eyes. The second contains a cartoon ear.
CC: Traditional theories of language defined it as something seen or heard.
1:48-1:49 A third circle appears containing a cartoon hand.
CC: But Pro-tactile ASL proves that language can also be communicated through touch.
1:49-1:54 Oscar talks to Clifton and Lauren outside.
CC: And for the people speaking it, it allows for a life with richer communication.
1:55-2:07 Oscar talks to Clifton. A title appears: “Oscar Serna. Research assistant, Gallaudet University”
CC: Oscar: “Since I became pro-tactile, all of the channels have opened up; information flows freely.”
“It’s like going from dial-up to broadband.”
2:07-2:11 Fade to black with credit to reporters: Nushmia Khan & Katherine Foley

Posted in Faculty, Linguistics, Uncategorized