For the first day of our Russian partners’ U.S. visit, we took a tour of the CSD headquarters, where they got the chance to meet our majority-deaf admin staff, and then settled into the Benjamin J. Soukup conference room to begin a workshop led by our Visual Language and Visual Learning Center (VL2) partners from Gallaudet University, Dr. Melissa Herzig and Melissa Malzkuhn. With the support of enthusiastic interpreters, we proceeded through a full day of information and knowledge exchange in four languages—American Sign Language (ASL), Russian Sign Language (RSL), English, and Russian.
Citing research from Dr. Laura Ann Petitto, neuroscientist and scientific director of VL2, Dr. Herzig explained that visual phonology (ASL) and sound phonology (spoken English) activate the same brain tissue, meaning that the brain acquires language through patterns, which are found in both signed and spoken languages.
For deaf and hearing children alike, early language exposure plays a crucial role in language development, leading to better eye gaze and joint attention, stronger vocabulary, and literacy development. Language-acquisition milestones—the typical ages at which children begin babbling and produce their first words—were found to be the same for ASL and spoken English. Contrary to popular belief, early bilingual exposure does not hinder the development of speech, and Dr. Herzig emphasized the importance of exposure to sign language at an early age. In one study, deaf signers who acquired ASL early were able to read complex English sentences more quickly and answer associated questions more accurately than those who acquired ASL later in life.
Dr. Herzig also presented various approaches to using sign language to improve literacy skills and bridge both languages, ASL and English. Studies show that fingerspelling skills positively correlate with stronger reading skills. Fingerspelling, reading, and writing are interrelated, and early exposure to fingerspelling helps children become better readers. To help build connections between signed and fingerspelled words, one can point at an object, a person, or a printed word and then fingerspell its name. This supports a child’s literacy development.
The VL2 Storybook App was built on this wealth of research knowledge. It has three modes: Watch, Read, and Learn. In Watch mode, the entire story is presented in ASL. In Read mode, children can watch ASL videos and read English for a self-directed reading experience supplemented by visuals—if a child does not know a certain word, they can touch that word and a video pops up in a box, signing and fingerspelling it. In Learn mode, children build their vocabulary through a glossary of words presented via the chaining method, in which each word is signed, fingerspelled, and then signed again.
To wrap up the day, we began learning how to use VL2’s Creator program, which provides a convenient platform to create new bilingual and visual storybooks. VL2’s Storybook library currently includes Norwegian and Japanese books in addition to ASL books, and we can’t wait to add Russian books to their virtual shelf. The Creator program looked complicated with its lines of code, but Melissa Malzkuhn showed us how we could alter it to create a customized book of our own.
“Combining visual stories and the touch screen tablet—a revolutionary learning tool—we can make magic,” Malzkuhn said.
With day one wrapped up, we look forward to learning more about the Creator app and to filming our Russian signer in the studio on day two!