
https://services.blog.gov.uk/2022/11/29/the-value-of-testing-riskiest-assumptions/

The value of testing riskiest assumptions

Categories: Service design, Tools, methods and techniques

[Image: a laptop screen on a sunny desk, with an out-of-focus spreadsheet we used to capture everyone’s assumptions.]

In a previous blog post, I described how to prioritise the riskiest assumptions in big problem spaces. This blog post shows why we should prioritise assumptions by their risk and gives some examples of when it’s been valuable.

Why other prioritisation methods struggle to bring focus to exploration

Prioritising our assumptions by risk can demand a lot from a team. We’re not prioritising a backlog we already have. We’re asking our team to write assumption statements from scratch. Why should we spend this extra time on prioritisation?

I’ve used other prioritisation methods like MoSCoW, impact vs. effort, or RICE. These are good for prioritising when we're quite confident about the solution and want to make improvements. This might be a prioritisation of components, features, user stories or products.

When uncertainty was higher, I’ve suggested prioritising hypotheses by impact and effort. Framing the work as experiments to prioritise was useful, but our hypotheses were always too centred on features.

Risk of failure doesn’t just come from focusing on low-value, high-effort features. It also comes from limited trust between people, slow feedback loops, or slow delivery, to name a few causes. None of these featured in our prioritisation methods. We weren’t focusing enough on the things that were most likely to cause us to fail. When this happens, explorations lose focus on value. They drag on and become failed investments.

How riskiest assumptions provide focus for teams working in big problem spaces

Using riskiest assumptions gives us a way of comparing risks, like for like, across all areas of our work. In an uncertain problem space, riskiest assumptions are most likely to lose us our bets. When we spend more time testing our riskiest assumptions, we reduce uncertainty. We also reduce the cost of failure, as we learn the big things that don’t work early on. That's much better than finding out later in development, or after launch.

I’ve tried prioritising assumptions by risk on my own. It was much more valuable when we opened up the evidence behind them for the whole team to build on and critique. Make things open: it makes things better.

When you score assumptions as a team, it:

  • increases the shared understanding of the problem space
  • draws out imbalances in knowledge
  • brings out interesting discussions about people’s fears, which creates greater psychological safety
  • results in a team that cares about the work they’ve prioritised

It’s also the most fun I’ve had prioritising.

When not to use riskiest assumptions testing

This has less value when we’re working in a much smaller problem space. Here, we should have greater certainty about what the solutions might be. This could include refining parts of an existing service in the public beta or live phase. By this stage, the value of our service should be clear. The team should already have a common understanding of the problems and opportunities. Other prioritisation methods are more useful here, and less time-consuming.

When it’s been valuable

When someone has a big idea

One team I worked with were trying to confirm the need for a particular solution for businesses, one that could be reused across government. It was a bit like a hammer looking for nails. We should avoid coming up with solutions before we understand users and their needs.

To create a safe space for discussion, I asked a senior stakeholder for a one-to-one to talk about how we might get the most value out of the discovery. To prepare, I prioritised some of their riskiest assumptions. I wanted to give them credit for the good work they had done so far. To do this, I wrote down “What we know so far…” next to each assumption.

When we met, the sponsor helped fill in some gaps in my knowledge about why they had those assumptions. I went in open-minded about their feedback. In turn, this helped them to receive my feedback well. After a positive meeting, they agreed to broaden the scope of the discovery. They were happy to approach it with less focus on a specific solution.

The expanded discovery found that there wasn’t a clear user need for the proposed solution. As a result of talking about their riskiest assumptions, they paused to reconsider the value of continuing. The discovery also uncovered some real problems businesses face, which they shared with their colleagues.

When the risks aren’t obvious

The Department for Education was looking at how to help adults at risk of losing their jobs to automation move into more secure work. They were testing different approaches to retraining people.

Nasreen Nazir and Georgina Watts facilitated theory of change sessions with their product managers every quarter to understand which potential services and products to prioritise building next. They started by placing user needs and known problems on the left, and the outcomes they wanted to reach on the right. They then suggested potential solutions in the middle that could help users get from their current needs to those outcomes. But potential solutions often carry assumptions, which the riskiest assumptions approach can draw out.

For example, where they had previously discussed solutions like, “we could build users a portal to find local jobs,” they could now tease out more detailed questions like, “is it realistic to expect Local Enterprise Partnerships to upload job vacancies into a portal?” Drawing out the riskiest assumptions helped them sense-check which services and products to prioritise and which to leave for later.

When an alpha phase feels too big

We were trying to make it easier to buy products and services for the public sector. At the end of our discovery, I facilitated the prioritisation of our riskiest assumptions.

We sat together to reach a consensus score for each assumption, looking at impact and confidence. It was fun to watch one person argue that an assumption deserved a 9 on confidence while another thought it should be a 3. People said things like, “Remember that user research session? The users never read all that text.” It was also great to see new members of the team challenge our assumptions from a fresh perspective. From then on, we were all referring to the same knowledge.

We wanted to check these scores with our stakeholders, to improve the accuracy of our prioritisation. We plotted the assumptions on a 2x2 grid, using impact and confidence. It showed stakeholders where we were most likely to start our alpha phase.
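
If it helps to picture the mechanics, here’s a minimal sketch in Python of how consensus scores could translate into a priority order and a position on the 2x2 grid. The 1 to 9 scales, the example assumptions and the risk formula (high impact combined with low confidence scores as riskiest) are my own illustration, not the actual tooling or scales our team used.

```python
# A minimal sketch of prioritising assumptions by risk.
# The scales, example statements and risk formula are illustrative
# assumptions, not the team's actual method.

from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    impact: int      # 1 (low) to 9 (high): how badly we fail if this is wrong
    confidence: int  # 1 (low) to 9 (high): the team's consensus confidence

    @property
    def risk(self) -> int:
        # High impact and low confidence make an assumption risky.
        return self.impact * (10 - self.confidence)

    @property
    def quadrant(self) -> str:
        # Position on the 2x2 grid: test the high-impact,
        # low-confidence quadrant first.
        impact = "high impact" if self.impact >= 5 else "low impact"
        confidence = "high confidence" if self.confidence >= 5 else "low confidence"
        return f"{impact} / {confidence}"

assumptions = [
    Assumption("Users will read all the guidance text", impact=7, confidence=3),
    Assumption("Suppliers will keep their catalogues up to date", impact=8, confidence=6),
    Assumption("Buyers can sign in with existing credentials", impact=4, confidence=8),
]

# Riskiest first: these are the assumptions to test at the start of alpha.
for a in sorted(assumptions, key=lambda a: a.risk, reverse=True):
    print(f"risk={a.risk:2d}  [{a.quadrant}]  {a.statement}")
```

Sorting by a score like this puts the assumptions most likely to sink the work at the top, which is what the grid showed our stakeholders visually.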

We used the prioritised riskiest assumptions to inform our alpha phase backlog. We knew what we most needed to learn about. Testing these assumptions helped us rule out certain approaches to improving our services. This stopped us investing lots of time in something, only to find out much later that it wasn’t valuable.

How have you found testing riskiest assumptions?

I’m still learning about this method and tweak it each time I use it. I’d be curious to know how other people are finding it. Please leave a comment if you want to share your reflections.
