Multi-Method Evaluation of Adaptive Systems

Abstract

When evaluating personalized or adaptive systems, we frequently rely on a single evaluation objective and a single method. This leaves us with “blind spots”. A comprehensive evaluation may require a thoughtful integration of multiple methods.
This tutorial will (i) demonstrate the wide variety of dimensions to be evaluated, (ii) outline the methodological approaches for evaluating these dimensions, (iii) pinpoint the blind spots that arise when using only one approach, (iv) demonstrate the benefits of multi-method evaluation, and (v) outline the basic options for integrating multiple methods into one evaluation design. Participants will become familiar with the wide spectrum of ways in which adaptive or personalized systems may be evaluated, and will be able to devise evaluation designs that follow the four basic options of multi-method evaluation.
The ultimate learning objective is to stimulate critical reflection on one’s own evaluation practices and those of the community at large.

Date
25 June 2021
Location
Online
(2021). Multi-method evaluation of adaptive systems. In Proceedings of the 29th Conference on User Modeling, Adaptation and Personalization (UMAP 2021).
