Tag Archives: end-user programming

Papers Accepted to IEEE Visual Languages/Human-Centric Computing (VL/HCC)

Good news! We received notification today about two papers accepted to VL/HCC later this year. Here are the paper titles and abstracts. When the camera-ready preprints are ready, I’ll be sure to post those as well.

Helping End Users Help Themselves with Idea Gardening

J. Cao, I. Kwan, F. Bahmani, M. Burnett, J. Jordahl, A. Horvath, S. Fleming, and S. Yang. End-User Programmers in Trouble: Can the Idea Garden Help Them to Help Themselves? To appear in the IEEE Conference on Visual Languages and Human-Centric Computing (VL/HCC), San Jose, USA, 2013.

Abstract—End-user programmers often get stuck because they do not know how to overcome their barriers. We have previously presented an approach called the Idea Garden, which makes minimalist, on-demand problem-solving support available to end-user programmers in trouble. Its goal is to encourage end users to help themselves learn how to overcome programming difficulties as they encounter them. In this paper, we investigate whether the Idea Garden approach helps end-user programmers problem-solve their programs on their own. We ran a statistical experiment with 123 end-user programmers. The experiment’s results showed that, even when the Idea Garden was no longer available, participants with little knowledge of programming who had previously used the Idea Garden were able to produce higher-quality programs than those who had not used it.

Keywords—Idea Garden; end-user programming; problem solving; barriers; mashups; quantitative empirical evaluation

User Interface Explanations in Intelligent Agents

T. Kulesza, S. Stumpf, M. Burnett, S. Yang, I. Kwan, and W.-K. Wong. Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models. To appear in the IEEE Conference on Visual Languages and Human-Centric Computing (VL/HCC), San Jose, USA, 2013.

Abstract—Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly “debug” an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, especially focusing on how the soundness and completeness of the explanations impacts the fidelity of end users’ mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants’ mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as per many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.

Keywords—mental models; explanations; end-user debugging; recommender systems; intelligent agents

Academics: Do You Program a Lot?

I know many Ph.D. candidates and professors who can program and do program on a regular basis, but I had never really considered how much of their time these people actually spend programming.

During an academic job interview, I was asked if I programmed a lot. Yes, I program. Do I do it a lot? Well, not exactly. None of the projects I currently work on rely on my programming skills, but I use programmatic thinking frequently. Like most computer science students, I write short scripts to automate recurring tasks. I build my CS361 web site using a shell script, Mustache, and jQuery. I write 50-line Python programs to generate level templates for a research project I’m working on. I fix JavaScript bugs here and there. I write R scripts to make my data analysis repeatable. But I don’t program the way a programmer working in industry would. I’m very much an end-user programmer now: not a novice programmer, but an end user.
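To give a concrete flavor of the kind of throwaway script I mean, here is a minimal sketch of a level-template generator in the spirit of those 50-line Python programs. Everything in it (the tile set, grid size, and output format) is invented for illustration; it is not the actual script from my project.

```python
import random

# Hypothetical tile set and grid size, invented for this example.
TILES = [".", "#", "~"]  # floor, wall, water
WIDTH, HEIGHT = 10, 6

def make_template(seed):
    """Generate one random level template as a list of row strings."""
    rng = random.Random(seed)  # seeded so each template is reproducible
    return ["".join(rng.choice(TILES) for _ in range(WIDTH))
            for _ in range(HEIGHT)]

if __name__ == "__main__":
    # Print a few templates, separated by blank lines.
    for seed in range(3):
        print("\n".join(make_template(seed)))
        print()
```

The point is less the code itself than the scale: a few minutes of scripting that supports a deliverable without being the deliverable.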

End-user programmers are the very people we usually assume have no formal computer science background but who need to engage in programmatic thinking. Still, with all of this exploration and discovery, I would be hard-pressed to say that I program even 50% of my time at work.

Most of my time these days is spent writing and designing study materials, assisting other students with the analysis of their data, and preparing for the Software Engineering I class that I teach this term. Where in all that do I find time to program? Generally, I don’t, so most of my programming is relegated to my free time. Perhaps I am not efficient with my free time. I often spend it learning frameworks and toolkits that I know about but haven’t worked with extensively, or hunting for tools that may help me now or in the future. Lately, I’ve also found myself programming for the pure fun of it: doing projects in Processing or trying to learn live coding in Clojure.

Thus I come out of this post with two questions. First, how many of you out there have a programming background but now program as an “end user”, meaning that the software you build is not the deliverable itself but instead helps you get other deliverables out the door? Second, how many people in academia program “a lot”, say, for more than 40% of their work time and 40% of their free time?

End-user Debugging Strategies: A Sensemaking Perspective

I recently had an opportunity to work on an interesting paper about how end users apply sensemaking when debugging. In this paper, we analyzed how end users working on real-world spreadsheets identified and fixed errors, viewed through the lens of a model known as sensemaking.

Sensemaking is a process that people use to extract information from artifacts and, in turn, to form hypotheses based on the information they acquire. In sensemaking, people forage for information by interacting with the artifacts (in this case, spreadsheet data and formulas) and then form and test hypotheses.

One of the main results of this paper is a sensemaking model for end-user debuggers. A key extension we proposed is a set of three loops: the “Bug Fixing” sensemaking loop, which is similar to Pirolli and Card’s sensemaking loop; the “Environment” sensemaking loop; and the “Domain” sensemaking loop. Participants usually left the bug fixing loop to head to the environment loop; essentially, they were struggling with Excel, or using some information from Excel to try to move forward with their task.
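To make the structure easier to picture, here is a toy sketch of the three loops as states, with a trace of the kind of movement described above. This is only an illustration of the model’s shape, not an implementation of sensemaking, and the trace is invented for the example.

```python
from enum import Enum

class Loop(Enum):
    BUG_FIXING = "Bug Fixing"    # similar to Pirolli and Card's sensemaking loop
    ENVIRONMENT = "Environment"  # wrestling with (or leveraging) Excel itself
    DOMAIN = "Domain"            # reasoning about the spreadsheet's subject matter

# Invented trace illustrating the common pattern: participants left the
# bug fixing loop for the environment loop, then returned to bug fixing.
trace = [Loop.BUG_FIXING, Loop.ENVIRONMENT, Loop.BUG_FIXING, Loop.DOMAIN]

for step, loop in enumerate(trace, start=1):
    print(f"step {step}: {loop.value} loop")
```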

We also examined in detail how participants moved between steps while sensemaking. Participants who did well at the task used systematic strategies and followed up their initial foraging by testing their initial hypotheses. Although they used two different strategies (selective, a depth-first investigation, versus comprehensive, a breadth-first investigation), both groups did well because they were systematic in their debugging work.

Valentina Grigoreanu, Margaret Burnett, Susan Wiedenbeck, Jill Cao, Kyle Rector, and Irwin Kwan. End-user debugging strategies: A sensemaking perspective. ACM Transactions on Computer-Human Interaction (TOCHI), 2012.