“What people say, what people do, and what people say they do are entirely different things.” – Margaret Mead
Observation is an important part of mixed methods user research. Observing people using a technology or system in the workplace, at home, or out in public can provide insights not available in quantitative data. As reliable and valid as hard metrics often are, they can also mislead when human dynamics interact in unexpected ways. An interaction paradigm that works well in a controlled environment can become problematic when coupled with the distractions of work and everyday life.
Observational research is particularly useful during the early phase of product or feature discovery, to uncover the pain points of existing solutions and to discover opportunities worth exploring further to serve unmet needs. Observation can reveal insights so obvious or commonplace to the subjects that they often fail to mention them to the researcher during other forms of qualitative research such as interviews or contextual inquiry.
Example: Metadata platform for content distribution.
When building a metadata platform designed to power a new media business, we conducted a series of customer interviews with content producers to understand their workflow and develop a solution more effective than the existing patchwork of tools.
We listened and diligently documented how they created and captured series and episodic metadata, and how it flowed through the ecosystem. That data determined how easily viewers could search for and discover the shows, and how effectively they could be surfaced through recommendations; as such, metadata systems suffer from the adage, “garbage in, garbage out.” Put simply, metadata matters. We then complemented those findings with field observations, watching the producers perform the tasks they had previously described.
A curious discovery was made.
Due to the volume of content being produced, the metadata task was often delegated to a junior team member. In other words, the producers we interviewed were not the target users. They were more often the beneficiaries of good, clean metadata, but they didn’t mention this detail because it didn’t occur to them that it mattered who entered the data.
But it does.
Paradoxically, the junior team members handling this thankless yet important task didn’t benefit from its outcome. A mismatch between the user group doing the work and the user group receiving the benefits should change your design approach. Without observational research, this crucial information would have been missed.
It changed our perspective on how much effort to dedicate to that part of the platform, designing for maximum usability and minimal errors. We embedded familiar patterns and migrated as many tasks to the system as possible to reduce cognitive load. We provided multiple ways of completing a task and allowed users to modify the system for their use. And we prevented errors wherever possible, while allowing users to recover quickly from mistakes.
It may seem like a waste of effort making something as banal as data entry a delightful experience. But it meant that the people doing the work, those who received little benefit from it, could accomplish it with minimal effort. This ensured that throughout the entire chain the most accurate metadata possible was being shared, enabling distribution to all of the major streaming platforms.
Observational research is a great tool within mixed methods. To increase the value and validity of the findings, some best practices should be considered.
1 Capture Context
- Make a note of what is happening in the field location. Describe the site in as much detail as possible: ambience, atmosphere, environment, tools, technologies, physical characteristics.
- Where appropriate, provide a visual record through photographs or video. In scenarios where ethical constraints prevent this, sketch the site, drawing the environment in detail.
- Close your eyes and describe what you can hear, how it changes your perception of the location, and any new information it reveals.
2 Increase Awareness
- Pay attention to feelings, assumptions, and biases, and how they might be influencing what is being observed and documented.
- Evaluate how diverse and inclusive the research design process is; who is observing and who is being observed. Aim for diversity and inclusion for maximum robustness.
- Set aside any thoughts of possible solutions, as they will influence your observations and introduce bias.
3 Document Behaviour
- Describe the humans being observed, both their characteristics and the tasks they are undertaking.
- Document the goals they are trying to accomplish and how they go about it. Describe whether they are successful in achieving them.
- Pay attention to emotive responses such as frustration and satisfaction. Describe explicitly what causes them.
4 Improve Process
- Capturing observations and insights in real time is crucial. Don’t rely on your memory for anything.
- Take good notes, even if you are recording the observations. Recordings take just as long, often longer, to review, and they can fail to capture the nuance of the moment.
- Date your observations for version control. Comparing sessions often provides insights not gleaned by reviewing in isolation.
5 Disseminate Findings
- Compile debriefing notes as soon as possible after observations while they are still fresh in your mind.
- Debriefing notes become an important source of preliminary findings. Preliminary findings can form the basis of hypotheses for future development.
- Share your preliminary findings as soon as they are compiled, as broadly as possible. Many teams don’t get access to the people they are developing products for.
Read Product Led Organisations Redefine Research to learn more about incorporating different research methods into your customer discovery.