Start thinking differently in the everyday
Summary
As we work to build innovation skills capability, it is tempting to point to an output (sometimes a big idea that finds traction within an organisation) as confirmation that our innovation work has succeeded, has been useful, has made a difference.
But experimentation and failure are integral (and important) parts of the innovation process: sometimes we never get those big ideas, and even when we do, the development journey is frequently a slow one. The path to successful innovation is necessarily non-linear, as we test concepts and refine our hypotheses. This iterative process is radically different from the traditional project management approach of deciding to do something and then implementing it; it is not business as usual. That difference can create tension, and can even cause internal stakeholders to lose faith in innovation, making it harder to pursue new ideas.
So how do we answer the question “Did this work?” or, perhaps more pertinently, “Is this working?” And how might we support internal stakeholders who are more used to seeing us hit (or miss) linear milestones in a project plan?
One way is to make use of an interim analysis, a concept that we’re borrowing from the field of large-scale scientific studies. There, it is the technical term for checking in on your results partway through an experiment: stopping a three-year clinical trial at six months, for example, if it is already apparent that the intervention being tested does more harm than good. In that context, the check examines essentially the same outputs that would be assessed at the end of the experiment, and uses statistics to decide whether the experiment should continue.
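To make that shape concrete, here is a minimal sketch in Python of an interim check. The numbers and the stopping threshold are invented for illustration; a real trial would use a formal, pre-registered statistical test rather than a simple rate comparison.

```python
# A minimal sketch of an interim analysis decision, with made-up numbers.
# Assumption: a stopping threshold was agreed before the experiment began.

def interim_check(successes: int, failures: int, min_success_rate: float = 0.3) -> str:
    """Compare the interim success rate against a pre-agreed threshold."""
    total = successes + failures
    if total == 0:
        return "continue"  # no evidence yet, keep going
    rate = successes / total
    if rate < min_success_rate:
        return "stop"      # interim evidence says this is not working
    return "continue"      # stay the course to the planned end point

# Six months into a notional three-year study:
print(interim_check(successes=4, failures=26))  # -> stop
```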
We might do the same, but with ideas. What we are looking for are measurable things that might serve as realistic predictors of overall success. The simplest of these is a check on the number of ideas generated: as we work through the ideation process, the volume of ideas is a simple way to assess whether that process is working effectively. We will likely want to twin this with an analysis of the health of those ideas, but we can still think in terms of volume, asking, for example, how many are built out through the subsequent stages.

It is important to return here to the notion of failing an idea, too. In an innovation context, failing an idea means that we have successfully evaluated the opportunity and decided that it is not the right one for us to pursue. Tracking the number of ideas failed therefore gives us a metric of progress towards finding a valuable solution, as well as an understanding of how we have de-risked future ideas, since we now have a better sense of what will work. As we like to say at Ninety, innovation is also about choosing what not to do.
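As a sketch of those volume-based measures, with an invented idea log and invented stage names, the counting itself is straightforward:

```python
from collections import Counter

# Hypothetical idea log: one entry per idea, recording how far it got.
# The stage names here are invented for illustration.
idea_stages = [
    "ideation", "ideation", "failed", "prototype", "failed",
    "ideation", "prototype", "failed", "pilot",
]

counts = Counter(idea_stages)

print(f"Ideas in play:          {len(idea_stages)}")
print(f"Deliberately failed:    {counts['failed']}")    # evaluated, then set aside
print(f"Built out to prototype: {counts['prototype']}")
print(f"Reached pilot:          {counts['pilot']}")
```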
By identifying the things we can measure at this interim analysis stage, we are relying on a suite of measures to build a picture of the impact of the work we’re doing as it progresses – rather than relying on the success of that one big idea towards the end of the process. In fact, we might conclude that based on the work we have done that the area we initially chose to pursue is not the right one for our organisation. Collecting these interim measures allows us to ‘show our working’ and provide an evidence-based approach to decision making. The more we do this, the better we become at identifying the best predictors for our organisation and its way of working. There is an inherent logic to the statement that ‘more ideas at the ideation stage leads to a higher chance of successful ideas emerging’, but it says nothing about what may be perceived as less tangible aspects of bringing an idea to fruition, like collaboration or mindset.
But these intangibles are actually (probably) measurable. Collaboration might be captured by a metric as simple as a count of messages shared within a team, or of meetings scheduled and attended. We might find that innovation teams that share and respond to more messages are more likely to develop successful ideas. In reality, this might be a terrible indicator of collaboration, but until we start to measure it, we simply don’t know.
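The simplest version of that collaboration metric is just a count per team. A sketch, using an invented message log in place of whatever collaboration tool an organisation actually uses:

```python
from collections import Counter

# Invented message log: (team, message) pairs standing in for an export
# from a real collaboration tool.
messages = [
    ("team_a", "Draft pitch attached"),
    ("team_b", "Can we meet tomorrow?"),
    ("team_a", "Feedback on the prototype"),
    ("team_a", "Revised costings"),
    ("team_b", "Notes from the customer call"),
]

messages_per_team = Counter(team for team, _ in messages)
for team, count in sorted(messages_per_team.items()):
    print(f"{team}: {count} messages shared")
```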
Our High Intensity Innovation Programme (part of our Gym Suite) is undertaken through a learning platform that gives us data on how much time each individual and team spends in the platform. Analysis of that data shows a strong correlation between time spent in the platform and success at the final ‘dragon’s den’ element of the programme. Time spent in the platform tells us nothing about the quality of that time (a participant could just log in and leave the system running whilst they have lunch), but the measure seems to be a good enough indicator to be useful for interim analysis.
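A minimal sketch of that kind of check, using invented numbers in place of real programme data, could use the Pearson correlation from Python’s standard library:

```python
from statistics import correlation  # available in Python 3.10+

# Invented data: hours each team logged in the platform, and the score
# the same team received at the final 'dragon's den' pitch.
hours_in_platform = [12, 30, 8, 25, 40, 15]
pitch_scores = [55, 78, 40, 70, 85, 60]

r = correlation(hours_in_platform, pitch_scores)
print(f"Pearson correlation: {r:.2f}")  # values near 1.0 suggest a strong link
```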
For each stage of the innovation journey, there is likely to be a similar measure that we can use to gauge success within that stage, and make the argument that it is contributing to our overall progress.
Talk to us as you begin your innovation skills journey about what these measures might be for your organisation and context, and we’ll help you to build them into your overall understanding of progress.