This article isn’t a new publication, but I thought I’d provide some information about it here. I did this work by analyzing email communication between team members within a large, multinational organization: almost 5,000 emails in all, sent from across the organization.
We found that many email discussions involved people who were included in the thread only after the first email was sent! This surprised me: I had initially assumed that if you emailed people about a topic, you would put all of them in the To/CC of the first message. Instead, in this organization, someone added a new recipient to the To/CC list as the thread went on in 57% of the threads.
In addition, I examined the messages and identified four main situations in which these emergent recipients appeared:
- Crisis: During a crisis, the message was passed to as many people as possible so that someone, anyone, might have information that would help.
- Explicit requests: The discussion contained a specific request that a person not initially included in the message get involved or take on a task. This was quite common for expertise-seeking; people would realize that they couldn’t solve a problem and CC a third party for help.
- Announcements: These were large-scale announcements of some sort that had to reach large numbers of people.
- Following-up: After a particular event, a message would be sent following up on it. People who were involved in the event but not on the original thread were included on the follow-up emails.
There were a number of takeaways that affect my email habits even now. I try to ensure that people are CCed right from the start, and if someone asks me to recommend a person they should talk to, then rather than simply telling them to speak with Person X, I actually CC Person X as part of my reply.
I recently had an opportunity to work on an interesting paper about how end users apply sensemaking when debugging. In this paper, we analyzed how end users working on real-world spreadsheets identified and fixed errors using a model known as sensemaking.
Sensemaking is a process that people use to learn information from artifacts and, in turn, to form hypotheses based on the information they acquire. In sensemaking, people forage for information by interacting with the artifacts (in this case, data and formulas) and then form and test hypotheses.
One of the main results of this paper is a sensemaking model for end-user debuggers. One of the extensions we proposed is a set of three loops: the “Bug Fixing” sensemaking loop, which is similar to Pirolli and Card’s sensemaking loop; the “Environment” sensemaking loop; and the “Domain” sensemaking loop. Participants usually left the bug-fixing loop to head to the environment loop: essentially, they were struggling with Excel, or using some information from Excel to try to move forward with their task.
We also examined in detail how participants moved between different steps while sensemaking. The participants who did well at the task used systematic strategies, following up their initial foraging by testing their initial hypotheses. Even though they used two different strategies (selective, a depth-first investigation, versus comprehensive, a breadth-first investigation), both groups were able to do well because they were systematic in their debugging work.
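To make the selective/comprehensive distinction concrete (this sketch is mine, not code from the paper): if you treat a spreadsheet’s formula references as a dependency graph, a selective debugger behaves roughly like a depth-first traversal, chasing one suspicious chain of references to its end, while a comprehensive debugger behaves like a breadth-first traversal, surveying all of a cell’s direct inputs before going deeper. A minimal Python sketch over a hypothetical dependency graph:

```python
from collections import deque

# Hypothetical spreadsheet dependency graph:
# each cell maps to the cells its formula references.
deps = {
    "Total":    ["Subtotal", "Tax"],
    "Subtotal": ["Price", "Qty"],
    "Tax":      ["Subtotal", "Rate"],
    "Price":    [], "Qty": [], "Rate": [],
}

def selective(start):
    """Depth-first: chase one reference chain to its end before backtracking."""
    order, stack, seen = [], [start], set()
    while stack:
        cell = stack.pop()
        if cell in seen:
            continue
        seen.add(cell)
        order.append(cell)
        stack.extend(reversed(deps[cell]))  # preserve left-to-right order
    return order

def comprehensive(start):
    """Breadth-first: survey all of a cell's direct inputs before going deeper."""
    order, queue, seen = [], deque([start]), {start}
    while queue:
        cell = queue.popleft()
        order.append(cell)
        for d in deps[cell]:
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return order

print(selective("Total"))      # ['Total', 'Subtotal', 'Price', 'Qty', 'Tax', 'Rate']
print(comprehensive("Total"))  # ['Total', 'Subtotal', 'Tax', 'Price', 'Qty', 'Rate']
```

Either visit order can succeed; the paper’s point is that systematically covering the graph matters more than which traversal you choose.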
Just yesterday at CHI 2012 (the ACM SIGCHI Conference on Human Factors in Computing Systems) in Austin, TX, my colleague Todd Kulesza presented our paper! Unfortunately I couldn’t be there, but I’m sure it went well. The paper was not only accepted at CHI but also received an honorable mention, which is absolutely spectacular.
This paper was the second project I helped with at Oregon State University. It is about how inducing a mental model in end users through training can enable them to more efficiently correct the mistakes of an intelligent agent, that is, a machine-learning system that assists users by making recommendations. The system we studied was a music recommender. By instructing these end users in how such agents make decisions, we found that users felt giving feedback was a better use of their time (a more favorable cost-benefit trade-off), and they had a more positive experience using the system overall.
Check out the citation below; the paper should appear soon in the ACM Digital Library. I’ll update it with the ACM Author-Izer link when ACM provides one 🙂
Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. Tell Me More? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2012), Austin, TX, USA.
Update: I heard from Dr. Burnett that Todd’s talk was fantastic! I also heard that this paper is on Page 1 of the CHI 2012 proceedings.
Adrian is presenting “To Talk or Not to Talk: Factors that Influence Communication around Changesets” at the ACM Conference on Computer-Supported Cooperative Work (CSCW). He went to Zurich to work with the IBM Rational Team Concert team located there, where he conducted interviews, surveys, and personal observations. He found that:
- Release: Discussions were often affected by the point in the release cycle. Early in the cycle, developers were concerned about features, but as time went on they became more concerned about the impending release and were much more cautious about the change sets being applied.
- Perception: The perception around a change set was also important. If a developer gave off a good impression, colleagues would monitor them less; if a developer gave off a poor impression, their changes might be more heavily scrutinized.
- Risk Assessment: The developers were concerned about risk. High-risk change sets strongly encouraged developers to speak with each other; for example, a large change set was considered higher risk.
- Business Goals: The developers were often conscientious about code quality but were always under pressure to release features and fix bugs. This led to the phenomenon known as technical debt, where developers know a fix is inelegant and ugly but are often unable to clean it up in the next cycle because management pressure continues to push them to release more features.
These considerations may have implications for collaborative recommender tools because they suggest the contexts to which a recommender system may have to adapt.
Adrian’s posted his slides here so you can take a look!
A poster describing the potential application of Information Foraging Theory to the way people seek information in social collaborative software development settings.
This is a preview of my poster that will be presented at the Future of Collaborative Software Engineering workshop, held in conjunction with the ACM Conference on Computer-Supported Cooperative Work (CSCW) 2012 in Seattle (Feb. 11–15).
The poster explores how a theory of how people forage for information in their environments might apply to a social setting, where information may reside in people’s heads as well as in the artifacts they work on.
Information foraging theory postulates that people search for information in much the way foragers in the wild search for food: the forager wants to maximize the value of the food obtained at as low a cost as possible, and indicators in the environment, called cues, suggest where high-yield patches may be.
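The value-versus-cost trade-off above can be sketched in a few lines. This is purely illustrative and not from the poster: the “patches,” their values, and their costs are made-up numbers standing in for information sources a developer might consult. Each patch is ranked by its expected rate of gain, value obtained per unit of effort spent.

```python
# Illustrative only: hypothetical information "patches" with made-up
# (expected value, cost to reach and search) pairs.
patches = {
    "wiki page":    (8.0, 4.0),
    "mailing list": (5.0, 1.0),
    "source code":  (9.0, 6.0),
}

def rate_of_gain(value, cost):
    """Expected information value gained per unit of foraging effort."""
    return value / cost

# A forager prefers the patch with the highest rate of gain first.
ranked = sorted(patches, key=lambda p: rate_of_gain(*patches[p]), reverse=True)
print(ranked)  # ['mailing list', 'wiki page', 'source code']
```

The interesting twist in the social setting is that a “patch” may be a person rather than an artifact, so the cost term includes interruption and social cost, not just search time.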