Papers Accepted to IEEE Visual Languages/Human-Centric Computing (VL/HCC)

Good news! We received notification today that two papers have been accepted to VL/HCC later this year. Here are the paper titles and abstracts; when the camera-ready preprints are ready, I'll be sure to post those as well.

Helping End Users Help Themselves with Idea Gardening

J. Cao, I. Kwan, F. Bahmani, M. Burnett, J. Jordahl, A. Horvath, S. Fleming, and S. Yang. End-User Programmers in Trouble: Can the Idea Garden Help Them to Help Themselves? To appear in the IEEE Conference on Visual Languages and Human-Centric Computing (VL/HCC), San Jose, USA, 2013.

Abstract—End-user programmers often get stuck because they do not know how to overcome their barriers. We have previously presented an approach called the Idea Garden, which makes minimalist, on-demand problem-solving support available to end-user programmers in trouble. Its goal is to encourage end users to help themselves learn how to overcome programming difficulties as they encounter them. In this paper, we investigate whether the Idea Garden approach helps end-user programmers solve problems in their programs on their own. We ran a statistical experiment with 123 end-user programmers. The experiment's results showed that, even when the Idea Garden was no longer available, participants with little knowledge of programming who had previously used the Idea Garden were able to produce higher-quality programs than those who had not used it.

Keywords—Idea Garden; end-user programming; problem solving; barriers; mashups; quantitative empirical evaluation

User Interface Explanations in Intelligent Agents

T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, and W.-K. Wong. Too Much, Too Little, or Just Right? Ways Explanations Impact End Users' Mental Models. To appear in the IEEE Conference on Visual Languages and Human-Centric Computing (VL/HCC), San Jose, USA, 2013.

Abstract—Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, focusing especially on how the soundness and completeness of the explanations impact the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as in many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.

Keywords—mental models; explanations; end-user debugging; recommender systems; intelligent agents
