I like paradoxes. I even like the word “paradox”. It’s just so delicious – things that, taken apart, must be true but, taken together, cannot be true. So like life.
There is a lot that is paradoxical about innovation – especially about new product development. One particular paradox I’d like to talk about here has to do with “managing” innovation. The problem is this: the nature of innovation is essentially unpredictable, but corporations have a legitimate need to manage it and harness it. The resulting tension has generated tons of ink, digital and otherwise.
In my twenty years working in corporate technology research and with InContext’s clients, I’ve seen two radically different approaches to dealing with this dilemma, representing each half of the paradox.
The first approach is to treat innovation no differently than any other business operation – as a deterministic process to be controlled, measured and predicted. There is a lot of pressure on companies, especially in today’s environment, to get the most out of every dollar invested. And to be sure, operational management techniques have greatly improved. IT innovation has allowed corporations to streamline operations, slashing costs and giving management more real-time and finer-grained monitoring and control than ever before.
A lot of this is rooted in a management mental model of business operations as a big machine. Each part is interconnected, each part can be monitored, each part can be replaced if defective. From Frederick Taylor on down, this model has pretty much been the dominant way to think about business. And in some cases it works well. Supply chain and production have been revolutionized by mathematical modeling, for example. The machine is deterministic, or at least stochastically predictable.
The problem is that new technology innovation and new product development are much messier than neat mathematical models lead us to believe – even stochastic ones. One big reason is that new technologies are often developed years ahead of the products that eventually commercialize them, and in ways that are completely different from what the innovators expected. Or products get used in unintended ways that turn out to be big hits. In my prior technology research management role, I remember trying to balance my portfolio of research investments using the Black-Scholes option-pricing model that was all the rage for a time. The model required “estimates” of inputs – market sizes, probability of commercial success (of what, I wondered), commercialization costs – that were just plain unknowable at the time. And the outputs were extraordinarily sensitive to some of these assumptions.
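To make that sensitivity concrete, here’s a minimal sketch (not my old portfolio model; the project numbers and parameter names are purely hypothetical) of a Black-Scholes-style “real options” valuation of a single research project, assuming the estimated market value plays the role of the stock price and the commercialization cost plays the role of the strike:

```python
# A hypothetical illustration of how sensitive a Black-Scholes-style
# "real options" valuation is to its estimated inputs. All figures are made up.
from math import log, sqrt, exp, erf


def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def call_value(s: float, k: float, t: float, r: float, sigma: float) -> float:
    """Black-Scholes value of a European call.

    Real-options reading: s = estimated present value of future product revenue,
    k = cost to commercialize, t = years until the go/no-go decision,
    r = risk-free rate, sigma = volatility (uncertainty) of the revenue estimate.
    """
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)


# Hypothetical project: $50M estimated market value, $60M to commercialize,
# decision point in 3 years, 5% risk-free rate.
base = dict(s=50.0, k=60.0, t=3.0, r=0.05)

# Same project, different (and equally defensible) guesses at the uncertainty.
for sigma in (0.2, 0.4, 0.6, 0.8):
    print(f"sigma = {sigma:.1f} -> project 'worth' ${call_value(sigma=sigma, **base):.1f}M")

# Nudging the unknowable market-size estimate moves the answer just as much.
for s in (40.0, 50.0, 60.0):
    params = {**base, "s": s}
    print(f"market estimate = ${s:.0f}M -> project 'worth' ${call_value(sigma=0.4, **params):.1f}M")
```

Run it and the same project comes out “worth” anywhere from roughly $6M to roughly $25M, depending on nothing but which volatility guess you feed it. That is exactly the problem.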
The completely opposite school of thought is the “genius” school. In this line of thinking, no amount of analysis is going to help, so go with the smartest people you can find and do what they say. Lock them in a room and make a big bet on what they want to do. Ignore the customers and charge ahead. The engineering mythologies of the skunkworks and the lone genius fit here, and the seduction for companies is that sometimes this works. Sometimes lightning does strike, and when it does, it feels heroic and it makes great press. The RAZR story, although somewhat mythologized itself in its retellings, is a well-documented example. But it’s also a cautionary tale, as Motorola was never able to make this kind of innovation sustainable. Scott Berkun, in his great book The Myths of Innovation, points out just how unsustainable, and how rare, this model actually is.
The right model to me is somewhere in the middle – more Buddha than Black-Scholes.
Paradoxically, if you try to control innovation too closely, it slips away. Rigid modeling and strict stage-gate processes both work against real creativity and innovation. To maximize your innovation effectiveness, you have to learn to give up your need to control too tightly. You cannot be absolutely certain of innovative outcomes – you need to learn to live with the chaotic, random process that is creativity. But learning to live with chaos and randomness doesn’t mean that innovation is unmanageable. You just have to learn to stack the odds in your favor.
Some of these odds-boosters I’ve blogged about before – cross-functional, cross-organizational teams, and having a repeatable, executable process for gathering and analyzing customer data. But there’s one more practice that’s important as well – a culture of building and testing, playing and prototyping rather than analyzing and predicting. Michael Schrage calls discovery through prototyping “Serious Play”, and says you can tell a lot about the creativity of a company by how it manages its prototypes. The more culturally “mainstream” playing is, the more creative the company – and the more competitive its offerings.
Early, iterative prototyping and testing with customers is a big part of what we do with clients every day. We’re exploring concepts, bouncing them off users and redesigning them together on the spot. It’s a little chaotic, and sometimes unnerving. But by letting go of our need to design too much up front, we actually get better ideas, and by the time we’ve gone through two or three rounds of this, the real design concept emerges.
Now, if you’ve followed me this far, you’ll appreciate this final little bit of paradoxical deliciousness. It’s easy to think, and some authors advocate, that just being “free” and playing is enough to be innovative. Instead, I assert that it’s a structured design framework that works best – one that incorporates a willingness to think freely and play with prototyped solution possibilities. Infuse this process with a deep understanding of users, your business focus and technical capabilities and you’re pretty much guaranteed to come up with something useful, practical and implementable.
So dealing with the “fuzzy front end” takes a little Zen. Stop trying to predict up front where your next billion-dollar idea is coming from – and build. Build, play and learn – and watch the winning designs emerge.