Ok, so it’s time for another post in my ongoing series (see my last post) on our efforts to revamp the top level of our organization’s website. Today, I’m moving from theory into practice: here are the steps we’re following to wrangle our online tasks into topics. Essentially, this is how we’re going to determine what our landing pages will be.
1. Task identification. We combed through our websites, our org’s service inventory and our Program Activity Architecture to identify tasks that could be accomplished online.
Through hours of slogging, we uncovered over 100 tasks, which we compiled in a big inventory spreadsheet.
This list is likely incomplete, but I’m hopeful that it covers enough breadth to be representative.
We did not prioritize tasks through this process — more on that later.
2. User-led categorization. Next, we handed the tasks spreadsheet to the consultants conducting our user research. They recruited and interviewed 13 users based on audience criteria that we already had in hand. In these interviews, the consultants asked participants how they’d group the tasks into categories, and what they’d call the groups they created — in other words, an open card sort.
3. Analysis of the research results. The consultants then analyzed the results of the card-sorting interviews and came up with a preliminary list of 25 categories across our main user groups (a high count, but somewhat expected, as we have a wide-ranging mandate). An important point here: since this was an open sort, the names on this list were derived from the category labels used by the research participants themselves. This means they are a key indicator of our clients’ mental models and the language they use when encountering our content — similar to what Gerry McGovern calls “customer carewords.”
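One common way to analyze open card-sort data (not necessarily the method our consultants used, and with made-up task names purely for illustration) is to count how often participants placed the same pair of tasks in the same group; pairs with high co-occurrence suggest candidate categories. A minimal sketch:

```python
from collections import Counter
from itertools import combinations

# Hypothetical card-sort results: each participant's own category
# labels, mapped to the tasks they placed under each label.
participants = [
    {"Payments": ["pay invoice", "update billing"], "Account": ["reset password"]},
    {"Billing": ["pay invoice", "update billing", "reset password"]},
]

# Count how often each pair of tasks was sorted into the same group.
co_occurrence = Counter()
for sort in participants:
    for tasks in sort.values():
        for pair in combinations(sorted(tasks), 2):
            co_occurrence[pair] += 1

# Pairs grouped together most often point at candidate categories.
for pair, count in co_occurrence.most_common():
    print(pair, count)
```

With real data, the co-occurrence matrix is often fed into a clustering or dendrogram view, but even the raw counts make the participants’ groupings visible.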
4. Refinement. We felt that 25 different topics was a tad high (e.g. it would form an overly long list if used as a search facet), so we worked with the research data to further refine this list down to 12 categories. We are currently working with our internal stakeholders to validate our work.
5. Tagging. Through the evolution of our topics listing, we’ve been updating our tasks spreadsheet to ensure our topic labels are assigned properly to each task. Aka tagging. Some tasks fall under more than one topic, as they should. For this tagging exercise, we’ve drawn again on the data from the card sorting interviews, but internal stakeholder feedback and editorial judgement both play a role as well.
For now, our tagging by topic still lives only in our big tasks spreadsheet, but once we implement it in our CMS, we hope to apply it in a variety of ways — not only for our landing pages, but also for various forms of search and browse navigation.
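Under the hood, task-to-topic tagging is just a many-to-many relationship. Here’s a minimal sketch (task and topic names are hypothetical, not our actual categories) of how the same tags could drive both a topic landing page and a topic facet in search:

```python
from collections import Counter

# Hypothetical task-to-topic tags; a task may carry more than one topic.
task_topics = {
    "renew a licence": ["Licensing", "Payments"],
    "file a complaint": ["Complaints"],
    "pay a fee": ["Payments"],
}

def tasks_for_topic(topic):
    """Tasks tagged with a topic, e.g. to populate that topic's landing page."""
    return sorted(t for t, topics in task_topics.items() if topic in topics)

def facet_counts():
    """Per-topic task counts, e.g. for a topic facet on search results."""
    return Counter(topic for topics in task_topics.values() for topic in topics)
```

Note that “renew a licence” shows up under two topics, which is exactly the behaviour we want from tagging rather than a strict one-task-one-folder hierarchy.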
But we’re not done yet.
1. Within these categories, we need to prioritize the top tasks that will be surfaced on our top-level landing pages. We’ll look at web traffic, client surveys, call centre data and internal stakeholder feedback to make that determination.
2. Further user testing is required. We need to plug these topics into the navigation prototypes we are building and field-test them (again, with honest-to-goodness clients from outside the firewall) to see how they function in practice. We fully expect further tweaks.
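The prioritization in point 1 could be as simple as a weighted score that blends the evidence sources. This is purely illustrative — the weights, metric names and task names below are assumptions, not our actual model:

```python
# Hypothetical weights over normalized (0-1) evidence sources.
WEIGHTS = {"page_views": 0.4, "survey_mentions": 0.4, "call_volume": 0.2}

# Hypothetical normalized metrics per task.
task_metrics = {
    "pay a fee":       {"page_views": 0.9, "survey_mentions": 0.7, "call_volume": 0.5},
    "renew a licence": {"page_views": 0.6, "survey_mentions": 0.9, "call_volume": 0.8},
}

def score(metrics):
    """Weighted sum of a task's evidence metrics."""
    return sum(WEIGHTS[m] * v for m, v in metrics.items())

# Highest-scoring tasks are candidates for the landing page.
top_tasks = sorted(task_metrics, key=lambda t: score(task_metrics[t]), reverse=True)
```

In practice, the numbers would come from analytics exports, survey tallies and call centre logs, and the ranking would still be sanity-checked against stakeholder feedback rather than taken at face value.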
It’s not a perfect process, but it looks like we’re on our way to a topic-based categorization for our org’s tasks that can be completed online.
We’re hopeful that this means when it comes to actually creating our landing pages, we will be able to present our key tasks in ways that make sense for our clients.