AI-Driven Multimodal Diagnosis of Developmental Speech and Language Disorders (DSLD)

Keywords: voice AI, multimodal diagnosis, fairness

Overview

This project develops an AI-driven, multimodal diagnostic framework for developmental speech and language disorders (DSLD), enabling earlier, fairer, and more objective identification of communication difficulties in children.

What we do

We integrate voice-based acoustic and linguistic analyses with clinical and behavioral measures to build interpretable machine-learning models that can support clinical decision-making.

Key components

  • Voice-based acoustic and linguistic feature extraction
  • Multimodal data integration (speech, language, clinical variables)
  • Interpretable and fairness-aware modeling
  • Foundations for clinical decision-support tools
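To make the first component concrete, here is a minimal sketch of voice-based acoustic feature extraction. The function name, the two features shown (short-time energy and zero-crossing rate), and the synthetic test signal are illustrative assumptions, not the project's actual feature set:

```python
import numpy as np

def extract_acoustic_features(signal: np.ndarray, sample_rate: int) -> dict:
    """Compute two simple acoustic features from a mono speech signal.

    Hypothetical sketch: the project's real pipeline would extract a
    richer set of acoustic and linguistic features.
    """
    # Short-time energy: mean squared amplitude over the whole segment
    energy = float(np.mean(signal ** 2))
    # Zero-crossing rate: fraction of adjacent samples whose sign flips,
    # a rough proxy for voicing vs. noisiness
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    return {
        "energy": energy,
        "zero_crossing_rate": zcr,
        "duration_s": len(signal) / sample_rate,
    }

# Example: a 1-second 220 Hz sine at 16 kHz standing in for a speech segment
sr = 16000
t = np.arange(sr) / sr
signal = 0.5 * np.sin(2 * np.pi * 220 * t)
features = extract_acoustic_features(signal, sr)
```

Features like these, computed per segment, would then be combined with linguistic and clinical variables as the input to the multimodal models described above.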