
End-user Debugging Strategies: A Sensemaking Perspective

I recently had an opportunity to work on an interesting paper about how end users apply sensemaking when debugging. In this paper, we analyzed how end users working on real-world spreadsheets identified and fixed errors, using sensemaking as our analytical model.

Sensemaking is a process that people use to extract information from artifacts and, in turn, form hypotheses based on the information they acquire. In sensemaking, people forage for information by interacting with the artifacts (in this case, spreadsheet data and formulas), and then form and test hypotheses.

One of the main results of this paper is a sensemaking model for end-user debuggers. One extension we proposed is a set of three loops: the “Bug Fixing” sensemaking loop, which is similar to Pirolli and Card’s sensemaking loop; the “Environment” sensemaking loop; and the “Domain” sensemaking loop. Participants usually left the bug-fixing loop to enter the environment loop – essentially, they were either struggling with Excel or drawing on information from Excel to move forward with their task.

We also examined in detail how participants moved between different steps while sensemaking. The participants who did well at the task used systematic strategies and followed up their initial foraging by testing their initial hypotheses. Although they used two different strategies (selective, a depth-first investigation, versus comprehensive, a breadth-first investigation), participants using either strategy did well because they were systematic in their debugging work.
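As a loose analogy (mine, not the paper's), the two strategies resemble depth-first versus breadth-first traversal of a spreadsheet's dependency graph. In this sketch, the cell names and dependency structure are invented for illustration:

```python
from collections import deque

# Hypothetical spreadsheet dependency graph: each cell maps to the
# precedent cells its formula references.
DEPS = {
    "Total": ["SubtotalA", "SubtotalB"],
    "SubtotalA": ["A1", "A2"],
    "SubtotalB": ["B1"],
    "A1": [], "A2": [], "B1": [],
}

def selective_order(root):
    """Selective (depth-first): drill down one chain of precedents
    at a time before moving to the next."""
    order, stack = [], [root]
    while stack:
        cell = stack.pop()
        order.append(cell)
        # Push precedents reversed so the first one is explored next.
        stack.extend(reversed(DEPS[cell]))
    return order

def comprehensive_order(root):
    """Comprehensive (breadth-first): scan all precedents at one
    level before going deeper."""
    order, queue = [], deque([root])
    while queue:
        cell = queue.popleft()
        order.append(cell)
        queue.extend(DEPS[cell])
    return order
```

A selective debugger starting at "Total" would follow SubtotalA down to A1 and A2 before ever looking at SubtotalB; a comprehensive debugger would check both subtotals first. The paper's point is that either order can succeed when applied systematically.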

End-user debugging strategies: A sensemaking perspective

Valentina Grigoreanu, Margaret Burnett, Susan Wiedenbeck, Jill Cao, Kyle Rector, Irwin Kwan
ACM Transactions on Computer-Human Interaction (TOCHI), 2012


Tell me more?: the effects of mental model soundness on personalizing an intelligent agent

Tell me more?: The effects of mental model soundness on personalizing an intelligent agent

Todd Kulesza, Simone Stumpf, Margaret Burnett, Irwin Kwan
CHI ’12 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012

Just yesterday at CHI 2012 (the ACM Conference on Human Factors in Computing Systems) in Austin, TX, my colleague Todd Kulesza presented our paper! Unfortunately I couldn’t be there, but I’m sure it went well. This paper was not only accepted at CHI but also received an honorable mention, which is absolutely spectacular.

This paper was the second project I helped with at Oregon State University. It is about how inducing a sound mental model in end users through training can enable them to correct the mistakes of an intelligent agent – that is, a machine-learning system that assists users by making recommendations – more efficiently. The study used a music recommendation system. After receiving instruction about how these agents make decisions, users felt that the cost-benefit ratio of giving the agent feedback made it a better use of their time, and they had a more positive experience using the system overall.

Check this out: the paper should appear soon in the ACM Digital Library. I’ll update this post with the ACM Author-ize link when ACM provides one 🙂

Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. Tell Me More? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), Austin, TX, USA.

Update: I heard from Dr. Burnett that Todd’s talk was fantastic! I also heard that this paper is on Page 1 of the CHI 2012 proceedings.