Developing a system for automatic co-speech gesture recognition using whatever I can get my hands on (and some unrelated side quests). Project developed within the Distributed Little Red Hen Lab at Case Western Reserve University and MULTIDATA.
Contact: kxc750@case.edu
GitHub: https://github.com/kate-carter
Posts
- Project Hiatus until 8/10/2025
- ISGS10
- Gemini API (gemini-2.5-flash-preview-05-20) for Co-Speech Gesture Annotation
- Things I’ve drawn while waiting on ML training/test runs
- Gemini API (gemini-2.0-flash-001 multimodal) for Co-Speech Gesture Annotation
- Initial Testing with ChatGPT for Video Classification of Co-Speech Gesture
- TubeViT on the CWRU HPC (Pioneer Cluster)
- Resume/CV
- Intersections Spring 2025
- Putting Number One Second: Dissecting the Motivations of Pathological Altruists