Multi-method evaluation of adaptive systems

Abstract

When evaluating personalized or adaptive systems, we frequently rely on a single evaluation objective and a single method. This leaves us with “blind spots”. A comprehensive evaluation may require a thoughtful integration of multiple methods. This tutorial (i) demonstrates the wide variety of dimensions to be evaluated, (ii) outlines the methodological approaches to evaluate these dimensions, (iii) pinpoints the blind spots when using only one approach, (iv) demonstrates the benefits of multi-method evaluation, and (v) outlines the basic options for how multiple methods can be integrated into one evaluation design. Participants familiarize themselves with the wide spectrum of opportunities for how adaptive or personalized systems may be evaluated, and have the opportunity to devise evaluation designs that comply with the four basic options of multi-method evaluation. The ultimate learning objective is to stimulate critical reflection on one’s own evaluation practices and those of the community at large.

Publication
29th Conference on User Modeling, Adaptation and Personalization (UMAP 2021)