Informing Users about Data Imputation: Exploring the Design Space for Dealing With Non-Responses

Abstract

Machine learning algorithms often require quantitative ratings from users to effectively predict helpful content. When these ratings are unavailable, systems make implicit assumptions or imputations to fill in the missing information; however, users are generally kept unaware of these processes. In our work, we explore ways of informing users about system imputations and experiment with presenting imputed ratings alongside the explanations users require to correct them. We investigate these approaches by deploying a text-messaging probe to 26 participants to help them manage their psychological wellbeing. We report quantitative results on users’ reactions to correct vs. incorrect imputations and on the potential risk of biasing their ratings. Drawing on semi-structured interviews with participants, we characterize the trade-offs regarding user autonomy and derive insights about alternative ways of involving users in the imputation process. Our findings provide useful directions for future research on communicating imputation and interpreting user non-responses.

Publication
Proceedings of the 11th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2023)