Running Contextual Design Research for Service Strategy

One of the first places to start when you want to improve your service is to look to your customer feedback and data. But what do you do when no one in the room understands how people are using your service, and the data you have is limited to high-level usage analytics? This is the perfect time to introduce ethnographic research into the mix to better understand how your service is perceived, used, and experienced.

I recently completed a study aimed at understanding how groups across the Stanford campus are using document storage and collaboration services, such as Box and Google Drive, to get their work done. We wanted to better understand the use of these tools and identify behavioral patterns, use cases, pain points, benefits, and service perception in order to craft a recommendation for the future of services offered in this category.

In this post, I wanted to share my process for running this research—how I went from one step to the next, what tools I used and why, and the realities of what the day-to-day looks like running a research project of this kind—in the hopes that it might be helpful for those considering running similar research.

Since this is a long post, here’s what I am covering below:

  1. Figure out what you want to learn
  2. Decide what method to use
  3. Find the right participants
  4. Schedule out the interviews
  5. Decide on a framework for analysis
  6. Prepare for the interviews
  7. Run the interviews
  8. Process the findings
  9. Identify themes
  10. Synthesize findings to generate insight
  11. Report findings and insights

1) Figure out what you want to learn

Since our goal was to inform a strategic recommendation on the direction of services offered to campus, I wanted to have the research supplement the macro-level analytics data we had available and provide specific insight into critical features and functionality, behavior and usage patterns, and service perception.

I think one of the hardest steps in design research of any kind is narrowing in on what exactly you are trying to learn. In the case of this project, the goal was very broad, because there was no pre-existing understanding of how these services were being used across campus. So, in some ways, this would be foundational research that might help guide future efforts. Even though our goals were broad, there were specific learning outcomes we wanted from the research.

We were focused on observing the following behavior:

  • How people collaborate on documents
  • How people store documents
  • How people share documents

I also had a set of top questions that we were interested in answering:

  • What primary tasks are they trying to accomplish?
  • What tools do they use, and how do they integrate them?
  • Where do people need to create workarounds to accomplish their tasks?
  • How valuable are Stanford-specific integrations, privacy, and other key features?
  • What are make-or-break features or functionality that would force them to use a non-Stanford offering?
  • If they are using a tool not offered by Stanford, why?
  • What made them decide to use a particular tool over another for a certain task?

These informed the choice of research method as well as the facilitator script and question prompts given to participants.

In summary: Start broad in an opportunity space, then narrow down to a specific hypothesis you want to investigate, validate, or refute in relation to the overall project objective. In my case, I narrowed in on a set of behaviors to observe and questions to answer to help inform a strategic service direction.

2) Decide what method to use

Once I had defined the focus of the research, it was time to decide on a methodology or format for the study itself. Given that I was on a short timeframe (3 weeks!) and that we were interested in observing user behavior, I decided to go with a “lightweight” contextual inquiry format for the research.

For those of you new to contextual inquiry, the idea is that you observe your participant in their “native habitat” and ask them to show you how they conduct their day-to-day activities. You use that observation to prompt specific inquiry into certain behaviors in order to uncover insights. Because my goals were broad and I had a range of questions and behaviors I wanted to probe, I felt observational inquiry was the best approach.

So I decided to target a diverse set of groups across campus and run individual 1-hour interviews where we would go to their workspace, and ask them to show us how they use document storage and collaboration tools. But since I had such limited time and could only fit about 10 interviews into my research, I had to make sure I was getting exactly the right people to participate.

In summary: Balance the time you have available vs. the depth of data you need. Use your hypotheses to determine what is most important to focus on and what method would best support your inquiry. For me, I wanted to go deep in each interview to understand nuance, and get a view that was not currently understood, thus the choice of contextual inquiry.

3) Find the right participants

But how do you find exactly the right people? The first step is to clearly articulate what kind of person you are looking for. This is where you should create a participant profile. The profile should include just enough detail to guide the identification of interview candidates.

Here’s what I came up with as a participant profile:

  • Staff member who is deeply using these tools in day-to-day work
  • Has awareness and involvement in how the overall team uses these tools (often means they are a project lead, manager, or assistant director)
  • Ideally has some say or was involved in the decision making process around which tool(s) to use for what
  • Represents a segment of a diverse set of use cases

This set of qualities helped me communicate to my colleagues what kinds of participants we were looking for. We then reached out to senior leadership across campus in order to get a broad representation of diverse segments (finance, compliance, research, admissions, education, communications, etc.), and asked them to suggest staff that worked in their various offices who might fit this profile.

I admit, this part of the process is incredibly strenuous and tedious: it is challenging to stay on top of email, keep track of all leads and possible candidates, and coordinate follow-up. To stay organized, I created a spreadsheet to help with fielding possible candidates, and a Google form to collect final candidate profile information in advance of the interview.

My Google form for collecting participant data

Then it was just a matter of emailing and calling candidates to confirm and schedule the hour-long interviews.

In summary: Create a solid participant profile to narrow in on the people who have a job to be done that involves what you’re trying to research. For me, having a clear participant profile helped me conduct fewer interviews to accommodate my shorter timeframe.

4) Schedule out the interviews

Since this project was “in-house,” in the sense that it was contained within Stanford, I used our central calendaring service to send invites to participants directly through their work calendars. For some, I had to call or email to schedule times that worked.

The bigger challenge was that I wanted to have both myself as facilitator and a second notetaker at each interview. For any observational research, it is very important to have an additional notetaker to capture all the insights from the interview. So I leaned on a few members of my team, opened up a shadowing opportunity to our Stanford Service Design Community of Practice group, and ended up with four notetaker volunteers whose calendars I had to juggle to schedule the interviews.

Phew! It was a lot of hard work, but I was able to schedule all 10 interviews over a period of a week and a half, and carefully worked in travel time between each interview (don’t forget travel time!).

In summary: Optimize your research plan to fit as many interviews as possible into the time you have. Include a notetaker to capture notes and unspoken responses, observing participants for both their answers and their behaviors, since facilitating may leave you unable to take detailed notes yourself.

5) Decide on a framework for analysis

Now that the interviews were lined up, I focused on preparing my interview team for the task. To do this, I wanted to decide ahead of time on the framework by which we would be analyzing and processing the findings. Since we were focused on broad insight to inform service strategy recommendations (which really has a lot to do with defining the business value of a service direction), I decided to use the Strategyzer Value Proposition Canvas framework of pains, gains, and jobs to guide the research.

One of the reasons I love this framework is that it is aimed at uncovering value—value in what is currently being provided, value in addressing current user pain, and value in helping people get stuff done. I think this is an appropriate framework for research that is more open-ended, and thus I chose it for this project. There are other frameworks (such as journey mapping: doing, thinking, feeling, touchpoints, etc.) that you might consider depending on your goals.

I used these three categories to guide preparing for interview facilitation as well as post-interview analysis:

  • Pains: What are people’s primary pain points?
  • Gains: What are people in love with?
  • Jobs: What are they trying to accomplish?

In summary: Find a framework for analyzing and codifying your results into a consumable format. Make sure your framework is geared towards answering your hypotheses.

6) Prepare for the interviews

To prepare for the interviews, I did the following:

  • I created a facilitator notes “cheat sheet” (below) outlining the goal of the research, the framework for findings, and the key questions we were trying to answer
My facilitator notes “cheat-sheet” that I printed to have on-hand during the interviews
  • I created a facilitator script (below) that would assist me in opening and setting the stage for each interview in a consistent way (though I admit, I mostly just used it as a reference and ad-libbed)
My facilitator script that I used to open each interview
  • I created a Google doc for notetaking in real time during the interviews.
  • I met with each notetaker for 30 minutes to debrief them on the goals, process, and questions we were after so they felt prepared.
  • I also scheduled a findings debrief workshop with all notetakers for after the interviews were over, to make sure we did analysis while the interviews were fresh.
  • I blocked time in my calendar to process interview notes shortly after each day of interviews.

In summary: Plan ahead for all aspects of your interviews, including how to help your notetakers feel prepared, repositories for storing the notes and data you are collecting, and time management to allow for adequate processing of the notes.

7) Run the interviews

It’s show time! Here’s how each interview went down.

  • First, a warm and fuzzy greeting to make them feel comfortable. It helps to open with a little small talk, show interest in what their role and organization is like, and make introductions.
  • Next, I run briefly through the opening facilitator script, telling them the purpose of the research and what to expect during the interview. This includes getting their permission to record audio and take pictures for notetaking purposes.
  • Once I get their OK, we dive in. I first ask them to tell me about some common ways they use document storage and collaboration tools in their office—common, day-to-day or ongoing tasks.
  • Once they share a few examples, I ask them to show me how they do those tasks.
  • As we are observing, I prompt the participant to: explain why they are doing something a certain way, talk about pain points or valued functionality, and run through my list of questions to make sure we are covering all desired points.
  • Then I thank them for their time, and send them a follow-up thank-you email the next day.

In contextual inquiry, the focus is on observing participants doing what they would normally do, so I concentrated on getting them to show me what they were talking about. This requires prompting them regularly to remind them to show in addition to tell. As a facilitator, also be sure to avoid yes/no questions or questions slanted towards a certain outcome—use open-ended inquiries and let the participants respond without direction.

In summary: Plan your time well with each participant, be consistent, and don’t forget to help them feel comfortable and explain the purpose of the research. Make sure you are prompting and not “leading” in your questioning.

8) Process the findings

After each interview, we had a bunch of raw notes compiled in a Google doc. I decided to do two things to process the data:

  • Create a one-page summary for each participant so that it could be easily revisited at a later date if needed
  • Convert raw notes into bite-sized findings, categorized by my framework: pains, gains, jobs

It took about 45 minutes to an hour to process the notes from each participant. Here is an example of the summary document I created:

The one-page summary doc summarizing findings from a single participant

And here is what my Google spreadsheet of findings looked like:

The spreadsheet I made to catalog research findings

In the findings spreadsheet, I gave each participant a unique ID in order to anonymize the findings while still being able to trace them back to a participant if needed.
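
I did all of this by hand in Google Docs and Sheets, but if your raw notes are already structured, the same bookkeeping is easy to script. Here is a minimal sketch of the idea in Python, with made-up data, file names, and column names (not my actual spreadsheet):

```python
import csv

# Hypothetical sketch: raw findings captured as (participant, category, finding)
# rows while processing interview notes.
raw_findings = [
    ("Participant A", "pain", "Loses track of which folder is the 'official' one"),
    ("Participant A", "job", "Co-authors reports with staff in other offices"),
    ("Participant B", "gain", "Loves that version history shows who changed what"),
]

# Give each participant a stable, anonymous ID (P01, P02, ...) so findings can be
# traced back later without exposing names.
ids = {}
for name, _, _ in raw_findings:
    ids.setdefault(name, f"P{len(ids) + 1:02d}")

# Write the bite-sized findings to a CSV, one row per finding, tagged by category.
with open("findings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant_id", "category", "finding"])
    for name, category, finding in raw_findings:
        writer.writerow([ids[name], category, finding])
```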

In summary: Collate your data and organize your results into a standardized, consistent format ready for analysis. Turn observations into data through chunking and categorization.

9) Identify themes

In order to further engage my interview team and identify insights in the findings, I decided to do a card sorting workshop. I invited all my notetakers as well as key stakeholders in the project to participate. I then took my findings spreadsheet and did a mail merge into a Microsoft Word label template to create cards for all my findings. I did this separately for each category (pain, gain, job) so that I could print each on its own color of paper. Here’s what the cards looked like:

Close-up of the printed cards
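
The Word mail merge worked fine, but this step is also easy to script if you prefer. Here is a rough sketch of the same idea, assuming the findings CSV from the previous step (hypothetical file and column names):

```python
import csv
from collections import defaultdict

# Hypothetical alternative to the Word label mail merge: read the findings
# spreadsheet (exported as CSV) and write one plain-text "card sheet" per
# category, ready to print on its own color of paper and cut apart.
cards = defaultdict(list)
with open("findings.csv", newline="") as f:
    for row in csv.DictReader(f):
        cards[row["category"]].append(f'{row["participant_id"]}: {row["finding"]}')

for category, findings in cards.items():
    with open(f"cards_{category}.txt", "w") as out:
        # One card per finding, separated by a cut line.
        out.write(("\n\n" + "-" * 40 + "\n\n").join(findings))
```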

I had about 300 cards, and asked everyone to first break into three teams. Each team got a stack of a single category (e.g. Pains). The goal was for them to create groupings within the category so that we could see commonalities/trends in the findings.

Card sorting in progress

We then documented that card sort by taking photos of the cards with my phone, and writing the top-level categories on the whiteboard.

Capturing categories from card sorting on the whiteboard

The next sort I had them do was by feature or functionality category (these categories came from the first card sort). I had everyone hunt down cards that were relevant to a particular feature, and then sort them by tool (Box, Google, OneDrive, etc.). We then documented these sortings as well.

An example “bucket” of sorted cards, organized by tool

At this point, we could clearly see where one tool performed better than another for a specific piece of functionality that users flagged as important.
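
My own spreadsheet never captured feature or tool columns (that grouping happened on the table with physical cards, as I note below), but if yours does, a quick tally can approximate the same tool-by-feature comparison. A sketch, assuming hypothetical “feature” and “tool” columns in the findings CSV:

```python
import csv
from collections import Counter

# Hypothetical: tally findings by (feature, tool, category) to see which tool
# gathers more pains or gains for a given piece of functionality.
counts = Counter()
with open("findings.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[(row["feature"], row["tool"], row["category"])] += 1

for (feature, tool, category), n in sorted(counts.items()):
    print(f"{feature:<25} {tool:<10} {category:<5} {n}")
```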

Because of my tight timeline, the photos are the only permanent documentation of this card sort activity. They informed what I ended up typing up as my research findings, but I didn’t take the time to go back into the spreadsheet and add the categories (at least not yet).

In summary: Co-create “finding the findings” to build empathy in your research team, and further boil down the overarching themes that will be used to support or refute your hypotheses on a macro level.

10) Synthesize findings to generate insight

Now for the good stuff, and the hard work. Honestly, this step takes dedicated time and thought: looking holistically and deeply at the findings, spotting the broad patterns across the data, and interpreting those findings into meaningful insights. I ended up with the following:

  • Summarizing top pain points, benefits, and service perceptions
  • Defining emerging personas
  • Documenting top use cases with examples and required tool functionality to support the use case
  • Framing key insights that might impact service strategy
  • Recommending next steps for continued research, specific service improvements, and service strategy

I think there is a benefit to having a single, primary researcher who participates in all interviews and is able to track threads of insight across the entire research project.

In summary: Take everything you’ve gathered so far and push it from data to insights, producing output that quickly and clearly conveys your findings to your partners and stakeholders and relates directly to the goal of the project.

11) Report findings and insights

Once I had a big brain dump of my findings, insights, and recommendations, I created a detailed, long-form findings report and a three-page summary report highlighting the key learnings. The smaller report was circulated more broadly in the organization to influence decision-making for service strategy.

In summary: Create materials that can be easily consumed, as well as detailed reports for posterity.

A Bow on Top

Well, that’s it! I think it’s important to document and share process, and I hope this post has been useful to you if you are considering or in the planning phases of running this kind of research. I would love to hear from you!
