HCI outcomes of AI-based decision support in an economic decision task

AI has the potential to support human decision making in everyday situations. In sensitive areas, however, most state-of-the-art algorithms face several challenges, such as algorithmic accountability, ethical considerations, and user acceptance. How should human and machine judgment be combined to tackle these challenges [1]? Should we work on more explainable AI systems [2]? Should we simplify algorithmic decision support, which could also make algorithms more robust in uncertain environments [3]? Or should we search for an efficient interaction between human and machine, in which the former controls the output of the latter [4]? Together with the student, we will conduct an experiment (e.g., [5]) to explore the interplay between human and AI-based decisions. In doing so, we can compare diverse AI algorithms (e.g., CNN, logistic regression, decision tree) or decision environments (e.g., risk vs. uncertainty) with regard to diverse human-machine interaction outcomes, such as performance, comfort, and acceptance.
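To illustrate the kind of algorithm comparison mentioned above, here is a minimal sketch, assuming scikit-learn, of how two candidate decision-support models (logistic regression and a shallow decision tree) could be benchmarked before a user study. The synthetic data set and all parameter choices are illustrative assumptions, not part of the planned experiment:

```python
# Hypothetical sketch: comparing candidate decision-support models.
# The synthetic data stands in for features of an economic decision task;
# model choices and parameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary "act / don't act" decisions with 8 numeric features
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated accuracy as a simple performance proxy
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

In the actual thesis, such offline accuracy figures would only be a starting point; the interesting comparison concerns HCI outcomes (performance, comfort, acceptance) when humans interact with each model's recommendations.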

Keywords: Human-Computer Interaction, Machine Learning, Transparent AI, Decision Making

Tasks (Scope depends on the type of Thesis)

Literature review;
Designing the user study (e.g., Tailorshop Decision Experiment);
Designing and implementing diverse AI-based decision support systems;
Collecting and analyzing user data;
Evaluating the systems with respect to HCI outcomes;
Comparing the pros and cons of AI-based decision support;

What we offer

Access to a large pool of participants;
Professional advice on Data Science and Hardware;
A pleasant working atmosphere and constructive cooperation;
Chances to publish your work at top conferences;
Research at the intersection between Psychology and Technology;

Qualification

Proactive and communicative work style;
Good English reading and writing skills;
Basic knowledge of Machine Learning;
Interest in working with Earable devices and interdisciplinary work;

Interested? Please contact: Tim Schneegans (schneegans@teco.edu)

References

[1] Aleksandra Litvinova. Extending the wisdom of crowds: how to harness the wisdom of the inner crowd. PhD thesis, 2020.
[2] Himabindu Lakkaraju, Stephen H Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1675–1684, 2016.
[3] Jongbin Jung, Connor Concannon, Ravi Shroff, Sharad Goel, and Daniel G Goldstein. Simple rules for complex decisions. arXiv preprint arXiv:1702.04690, 2017.
[4] Julio Borges, Matthias Budde, Oleg Peters, Till Riedel, and Michael Beigl. Towards two-tier citizen sensing. In 2016 IEEE International Smart Cities Conference (ISC2), pages 1–4. IEEE, 2016.
[5] Daniel Danner, Dirk Hagemann, Daniel V Holt, Marieke Hager, Andrea Schankin, Sascha Wüstenberg, and Joachim Funke. Measuring performance in dynamic decision making. Journal of Individual Differences, 2011.