Abstract
As in oral phonology, prosody is an important carrier of linguistic information
in sign languages. One of the most prominent ways this reveals itself is in the
time structure of signs: their rhythm and intensity of articulation. To observe
these effects empirically, the velocity of the hands can be computed
throughout the execution of a sign. In this article, we propose a method for
extracting this information from unlabeled videos of sign language, exploiting
CoTracker, a recent advance in computer vision that can track every point
in a video without the need for any calibration or fine-tuning. The dominant hand
is identified by clustering the computed point velocities, and its dynamic
profile is plotted to make the prosodic structure of signing apparent. We apply our
method to different datasets and sign languages, and perform a preliminary
visual exploration of results. This exploration supports the usefulness of our
methodology for linguistic analysis, though issues remain to be tackled, such as
handling bi-manual signs and a formal, numerical evaluation of accuracy. Nonetheless,
the absence of any preprocessing requirements may make it useful for other
researchers and datasets.
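
As a rough sketch of the pipeline the abstract describes: the CoTracker call below follows the torch.hub usage documented in the facebookresearch/co-tracker repository, while the input file name, grid size, and the use of KMeans for the (unspecified) clustering step are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative sketch only: track a dense grid of points with CoTracker,
# compute per-point speeds, cluster them to isolate the fast-moving (hand)
# points, and plot their velocity profile over time.
import numpy as np
import torch
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from torchvision.io import read_video

video_path = "sign_clip.mp4"  # hypothetical unannotated sign language clip
frames, _, info = read_video(video_path, pts_unit="sec")  # (T, H, W, C) uint8
fps = info.get("video_fps", 25.0)

# CoTracker expects a float video of shape (B, T, C, H, W).
video = frames.permute(0, 3, 1, 2).float()[None]

cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")
with torch.no_grad():
    # pred_tracks: (1, T, N, 2) pixel coordinates of N grid points per frame.
    pred_tracks, pred_visibility = cotracker(video, grid_size=20)

tracks = pred_tracks[0].cpu().numpy()  # (T, N, 2)

# Frame-to-frame speed (pixels per second) of every tracked point.
speed = np.linalg.norm(np.diff(tracks, axis=0), axis=-1) * fps  # (T-1, N)

# Cluster points by mean speed and keep the faster cluster as a rough proxy
# for the dominant hand (a simplification of the paper's clustering step).
mean_speed = speed.mean(axis=0).reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(mean_speed)
fast_cluster = max((0, 1), key=lambda k: mean_speed[labels == k].mean())
profile = speed[:, labels == fast_cluster].mean(axis=1)

# Dynamic profile: hand speed over time, whose peaks and valleys are read
# as the prosodic structure of the sign.
plt.plot(np.arange(len(profile)) / fps, profile)
plt.xlabel("time (s)")
plt.ylabel("speed (px/s)")
plt.show()
```
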
Citation
@inproceedings{sevilla_prosody_2024,
title = "Automated Extraction of Prosodic Structure from Unannotated Sign Language Video",
author = "Sevilla, Antonio F. G. and Lahoz-Bengoechea, Jos{\'e} Mar{\'\i}a and Diaz, Alberto",
editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
pages = "1808--1816",
url = "https://aclanthology.org/2024.lrec-main.161",
}