
Papers Accepted to IEEE Visual Languages/Human-Centric Computing (VL/HCC)

Good news! We received notification today that two of our papers were accepted to VL/HCC later this year. Here are the paper titles and abstracts. Once the camera-ready preprints are available, I'll be sure to post those as well.

Helping End Users Help Themselves with Idea Gardening

J. Cao, I. Kwan, F. Bahmani, M. Burnett, J. Jordahl, A. Horvath, S. Fleming, and S. Yang. End-User Programmers in Trouble: Can the Idea Garden Help Them to Help Themselves? To appear in the IEEE Conference on Visual Languages and Human-Centric Computing (VL/HCC), San Jose, USA, 2013.

Abstract—End-user programmers often get stuck because they do not know how to overcome their barriers. We have previously presented an approach called the Idea Garden, which makes minimalist, on-demand problem-solving support available to end-user programmers in trouble. Its goal is to encourage end users to help themselves learn how to overcome programming difficulties as they encounter them. In this paper, we investigate whether the Idea Garden approach helps end-user programmers problem-solve their programs on their own. We ran a statistical experiment with 123 end-user programmers. The experiment’s results showed that, even when the Idea Garden was no longer available, participants with little knowledge of programming who previously used the Idea Garden were able to produce higher-quality programs than those who had not used the Idea Garden.

Keywords—Idea Garden; end-user programming; problem solving; barriers; mashups; quantitative empirical evaluation

User Interface Explanations in Intelligent Agents

T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, and W.-K. Wong. Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models. To appear in the IEEE Conference on Visual Languages and Human-Centric Computing (VL/HCC), San Jose, USA, 2013.

Abstract—Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly “debug” an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, especially focusing on how the soundness and completeness of the explanations impacts the fidelity of end users’ mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants’ mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as per many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.

Keywords—mental models; explanations; end-user debugging; recommender systems; intelligent agents


Tell me more?: the effects of mental model soundness on personalizing an intelligent agent


Todd Kulesza, Simone Stumpf, Margaret Burnett, Irwin Kwan
CHI ’12 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012

Just yesterday at CHI 2012 (the ACM SIGCHI Conference on Human Factors in Computing Systems) in Austin, TX, my colleague Todd Kulesza presented our paper! Unfortunately I couldn’t be there, but I’m sure it went well. The paper was not only accepted at CHI but also received an honorable mention, which is absolutely spectacular.

This paper was the second project I helped with at Oregon State University. It examines how inducing a sound mental model in end users through training can enable them to more efficiently correct the mistakes of an intelligent agent – that is, a machine-learning system that assists users by making recommendations. Our experiment used a music recommendation system. When we instructed end users about the details of how the agent makes its decisions, they perceived a better cost-benefit tradeoff in giving the agent feedback on its suggestions, and they had a more positive experience using the system overall.

Check it out below; it should be appearing soon in the ACM Digital Library. I’ll update this post with the ACM Author-ize link when ACM provides one 🙂

T. Kulesza, S. Stumpf, M. Burnett, and I. Kwan. Tell Me More? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. In the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), Austin, USA, 2012.

Update: I heard from Dr. Burnett that Todd’s talk was fantastic! I also heard that this paper is on page 1 of the CHI 2012 proceedings.