Category Archives: Research

Academics: Do You Program a Lot?

I know many Ph.D. candidates and professors who can program and do program regularly, but I had never really considered how much of their time they actually spend programming.

During an academic job interview, I was asked if I programmed a lot. Yes, I program. Do I do it a lot? Well, not exactly. None of the projects I currently work on rely on my programming skills, but I use programmatic thinking frequently. Like most computer science students, I write short scripts for frequently repeated tasks. I build my CS361 web site using a shell script, Mustache, and jQuery. I write 50-line Python programs to generate level templates for a research project I’m working on. I fix JavaScript bugs here and there. I write R scripts to make my data analysis repeatable. But I don’t program the way a programmer in industry would. I’m very much an end-user programmer now – not a novice programmer, but an end user.
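To give a concrete flavor of the kind of end-user scripting I mean, here is a minimal sketch along the lines of those 50-line level-template generators. Everything in it – the grid size, the tile characters, the wall-density parameter – is invented for illustration, not taken from my actual project.

    import random

    # Hypothetical sketch of an "end-user" script: generate simple grid-based
    # level templates. All parameters here are made up for illustration.

    def generate_level(width=10, height=8, wall_density=0.2, seed=None):
        """Return a level template as rows of '#' (wall) and '.' (floor)."""
        rng = random.Random(seed)
        rows = []
        for y in range(height):
            row = []
            for x in range(width):
                # Keep the border solid; fill the interior at random.
                if x in (0, width - 1) or y in (0, height - 1):
                    row.append("#")
                else:
                    row.append("#" if rng.random() < wall_density else ".")
            rows.append("".join(row))
        return rows

    if __name__ == "__main__":
        for line in generate_level(seed=42):
            print(line)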

End-user programmers are the very people we usually assume have no formal computer science background yet need to engage in programmatic thinking. Still, even with all of this exploration and discovery, I would be hard-pressed to say that I program even 50% of my time at work.

Most of my time these days is spent writing, designing study materials, and helping other students analyze their data; I also prepare materials for the Software Engineering I class that I teach this term. Where in that do I find time to program? Generally, I don’t, so most of my programming is relegated to my free time. Perhaps I am not efficient with that time: I often spend it learning frameworks and toolkits that I know of but haven’t worked with extensively, or hunting for tools that may help me now or in the future. Lately, I’ve also been programming for the pure fun of it – doing projects in Processing or trying to learn live coding in Clojure.

Thus I come out of this post with two questions. First, how many of you have a programming background but now program as an “end user” – that is, the software you build is not the deliverable itself, but helps you get other deliverables out the door? Second, how many people in academia program “a lot” – say, more than 40% of their work time and 40% of their free time?


CHI2013 Paper Accepted: The Whats and Hows of Programmers’ Foraging Diets

In more news from the conference acceptance front, our CHI paper “The Whats and Hows of Programmers’ Foraging Diets” has also been accepted. The paper examines how programmers forage for information while debugging, paying particular attention to the types of information they seek. Our participants were trying to track down a Java bug in jEdit using Eclipse.

The paper’s findings include the following: participants used very diverse strategies to pursue the same task; their enrichment strategies of searching and breakpoint debugging (that is, modifying the environment to add information) were very repetitive; and participants often foraged within a single information patch (especially those who scanned the package explorer and the outline view thoroughly).

I’ll do a more detailed writeup on this paper soon, when the camera-ready version is prepared!

D. Piorkowski, S. D. Fleming, I. Kwan, M. Burnett, C. Scaffidi, R. Bellamy, J. Jordahl. The Whats and Hows of Programmers’ Foraging Diets, to appear in ACM Conference on Human Factors in Computing Systems (CHI), Paris, France, 2013.

ICSE13 paper accepted: The Role of Domain Knowledge and Hierarchical Control Structures in Socio-Technical Coordination

The official notifications for the International Conference on Software Engineering (ICSE 2013) have been sent out. ICSE is an archival conference and one of the top venues in the field; this year’s acceptance rate was 18.5%.

The paper is about how the presence of domain knowledge among team members affects coordination in a software team. Many such teams also have hierarchical structures in place that encourage certain people to limit communication with others so as to follow team boundaries. We investigated two projects in a large global software organization and contrasted how each structured its teams and, in turn, the resulting communication patterns. Techniques the teams used to “spread” domain knowledge included incorporating new hires into the project, rotating roles, and making knowledgeable team members easily reachable.

I’ll give a detailed account of the paper once we submit our camera-ready version (which isn’t due until March)!

D. Damian, R. Helms, I. Kwan, S. Marczak, B. Koelewijn. The Role of Domain Knowledge and Hierarchical Control Structures in Socio-Technical Coordination, to appear in IEEE International Conference on Software Engineering (ICSE), San Francisco, USA, 2013.

The hidden experts in software-engineering communication (NIER track)

This article isn’t a new publication, but I thought I’d provide some information about it here. For this work, I analyzed email communication between team members in a large multinational organization: almost 5000 emails in all, sent across the organization.

We found that many email discussions involved people who were included in the thread only after the first email was sent! This surprised me: I had initially assumed that if you emailed people about a topic, you would put all of them on the To/CC line of the first message. Instead, in this organization, someone added a new recipient to the To/CC list mid-thread in 57% of the threads.
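Out of curiosity, here is a rough sketch (not the actual analysis code) of how one could flag such threads, assuming each thread is stored as a list of per-message To/CC recipient sets in send order; the data format is my own invention for illustration.

    # Flag "emergent" threads: threads where a later message introduces a
    # recipient who was not on the To/CC list of the first message.

    def emergent_threads(threads):
        """threads: dict mapping thread id -> list of recipient sets, in send order."""
        flagged = []
        for thread_id, messages in threads.items():
            initial = messages[0]
            later = set().union(*messages[1:]) if len(messages) > 1 else set()
            if later - initial:
                flagged.append(thread_id)
        return flagged

    # Toy data: in thread "t2", carol joins only after the first message.
    threads = {
        "t1": [{"alice", "bob"}, {"alice", "bob"}],
        "t2": [{"alice"}, {"alice", "carol"}],
    }
    print(emergent_threads(threads))                      # ['t2']
    print(len(emergent_threads(threads)) / len(threads))  # fraction of threads, cf. the 57%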

In addition, I examined the messages and identified four main situations in which this “emergence” of new participants occurred:

  • Crisis: There was a big crisis situation, and the message was being passed to as many people as possible so that someone, anyone, might have information that would help.
  • Explicit requests: In the discussion, there was a specific request that a person who was not initially included in the message be involved or take on a task. This is quite common for expertise-seeking; some people would realize that they couldn’t solve a problem and CC a third party for help.
  • Announcements: These were large-scale announcements of some sort that had to reach large numbers of people.
  • Follow-ups: After a particular event, a message would be sent following up on it; people who were involved in the event but not on the initial thread were included on the follow-up emails.

A number of takeaways affect my email habits even now: I try to ensure that people are CCed right from the start, and if someone asks me to recommend a person to talk to, rather than simply saying they should speak with Person X, I CC Person X as part of my reply.

The hidden experts in software-engineering communication (NIER track)

Irwin Kwan, Daniela Damian
ICSE ’11 Proceedings of the 33rd International Conference on Software Engineering, 2011

End-user Debugging Strategies: A Sensemaking Perspective

I recently had an opportunity to work on an interesting paper about how end users apply sensemaking when debugging. In it, we used a model known as sensemaking to analyze how end users working on real-world spreadsheets identified and fixed errors.

Sensemaking is the process by which people learn information from artifacts and, in turn, form hypotheses based on the information they acquire. In sensemaking, people forage for information by interacting with artifacts (in this case, spreadsheet data and formulas) and then form and test hypotheses.

One of the main results of this paper is a sensemaking model for end-user debuggers. A key extension we proposed is three loops: the “Bug Fixing” sensemaking loop, which is similar to Pirolli and Card’s sensemaking loop; the “Environment” sensemaking loop; and the “Domain” sensemaking loop. Participants usually left the bug-fixing loop for the environment loop – essentially, they were struggling with Excel, or using information from Excel to try to move forward with their task.

We also examined in detail how participants moved between different steps while sensemaking. The participants who did well at the task used systematic strategies and followed up their initial foraging by testing their initial hypotheses. Even though they used two different strategies (selective, a depth-first investigation, versus comprehensive, a breadth-first investigation), both did well because they were systematic in their debugging work.
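For the curious, here is a toy sketch of how one might tally transitions between the three proposed loops once each participant action has been coded; the coded sequence below is invented purely to show the idea.

    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    # Invented example of a coded action sequence for one participant.
    coded_actions = ["bug_fixing", "bug_fixing", "environment",
                     "environment", "bug_fixing", "domain", "bug_fixing"]

    # Count only the moves between loops, ignoring repeats within a loop.
    transitions = Counter((a, b) for a, b in pairwise(coded_actions) if a != b)

    for (src, dst), count in transitions.most_common():
        print(f"{src} -> {dst}: {count}")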

End-user debugging strategies: A sensemaking perspective

Valentina Grigoreanu, Margaret Burnett, Susan Wiedenbeck, Jill Cao, Kyle Rector, Irwin Kwan
ACM Transactions on Computer-Human Interaction (TOCHI), 2012

Tell me more?: the effects of mental model soundness on personalizing an intelligent agent

Tell me more?: the effects of mental model soundness on personalizing an intelligent agent

Todd Kulesza, Simone Stumpf, Margaret Burnett, Irwin Kwan
CHI ’12 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2012

Just yesterday at CHI 2012 (the ACM Conference on Human Factors in Computing Systems) in Austin, TX, my colleague Todd Kulesza presented our paper! Unfortunately, I couldn’t be there, but I’m sure it went well. The paper was not only accepted at CHI but also received an honorable mention, which is absolutely spectacular.

This paper came out of the second project I helped with at Oregon State University. It is about how inducing a sound mental model in end users through training can enable them to more efficiently correct the mistakes of an intelligent agent – that is, a machine-learning system that assists users by making recommendations. The system we studied was a music recommender. When we instructed end users about how these agents make decisions, the users felt that making suggestions to the agent offered a better cost-benefit ratio for their time, and they had a more positive experience with the system overall.

Check this out – it should appear soon in the ACM Digital Library. I’ll update this post with the ACM Author-ize link when ACM provides one 🙂

Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. Tell Me More? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. ACM Conference on Human Factors in Computing Systems (CHI), Austin, USA, 2012.

Update: I heard from Dr. Burnett that Todd’s talk was fantastic! I also heard that this paper is on Page 1 of the CHI 2012 proceedings.

To Talk or Not to Talk: Factors that Influence Communication around Changesets

Adrian is presenting “To Talk or Not to Talk: Factors that Influence Communication around Changesets” at the ACM Conference on Computer-Supported Cooperative Work (CSCW). He went to Zurich to work with the IBM Rational Team Concert team located there, interviewing them, running surveys, and making in-person observations. He found that:

  1. Release: Discussions were often affected by the point in the release cycle. Early in the cycle, developers were concerned about features; as the release approached, they became more concerned about the software to be shipped and much more cautious about the changesets being applied.
  2. Perception: The perception around a changeset also mattered. If a developer gave off a good impression, colleagues monitored them less; if a developer gave off a poor impression, their changes were more heavily scrutinized.
  3. Risk Assessment: The developers were concerned about risk. High-risk changesets strongly encouraged developers to speak with one another; for example, a large changeset was considered higher risk.
  4. Business Goals: The developers were conscientious about code quality but were always under pressure to release features and fix bugs. This leads to the phenomenon known as technical debt: the developers know a fix is inelegant and ugly, but are often unable to clean it up in the next cycle because management pressure keeps pushing them to release more features.

These considerations may have implications for collaborative recommender tools because they suggest the contexts to which a recommender system may have to adapt.
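As a purely speculative sketch (not something from the paper), a recommender might fold the four factors above into a single “nudge developers to discuss this changeset” score; the weights and thresholds below are invented.

    # Speculative scoring of whether to prompt discussion around a changeset,
    # loosely mirroring the four factors above. All numbers are invented.

    def communication_score(days_to_release, files_changed,
                            author_reputation, feature_pressure):
        """Higher score = stronger nudge to discuss the changeset (0.0-1.0).

        author_reputation and feature_pressure are assumed to be in [0, 1].
        """
        score = 0.0
        if days_to_release < 14:   # Release: late in the cycle, be cautious
            score += 0.3
        if files_changed > 20:     # Risk: large changesets are riskier
            score += 0.3
        score += 0.3 * (1 - author_reputation)  # Perception: less trust, more scrutiny
        score += 0.1 * feature_pressure         # Business goals: pressure breeds shortcuts
        return min(score, 1.0)

    print(communication_score(days_to_release=7, files_changed=35,
                              author_reputation=0.4, feature_pressure=0.8))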

Adrian’s posted his slides here so you can take a look!

To talk or not to talk: factors that influence communication around changesets

Adrian Schröter, Jorge Aranda, Daniela Damian, Irwin Kwan
CSCW ’12 Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work, 2012