Authors: Yakira Mirabito, Xiaowen Liu, Kosa Goucher-Lambert
Design rationales are the explicit justifications behind design decisions, yet they vary widely in structure and depth, making their quality difficult to assess at scale. This paper addresses that challenge by developing a data-driven approach that combines expert human judgments with computational linguistics to evaluate design rationale quality. Using a dataset of 2,250 rationales generated across different formats, the study identifies language patterns that predict expert evaluations and builds interpretable models that automate quality assessment across five dimensions. The work also empirically validates the Feature-Specification-Evidence (FSE) framework as a structured approach that improves the communication of design rationale. Together, these contributions advance the study of design reasoning and support explainable AI by enabling transparent evaluation of design justifications, whether written by human designers or generated by AI systems, and they offer practical guidance for improving rationale quality across different capture methods.