Thomas Thurston

Measuring Innovation: The Blind Spot

Updated: Apr 3

I was recently at an event where executives from the world’s biggest companies spent hours griping about innovation.  They knew innovation was important, but bemoaned its aversion to hard metrics.  In other words, everyone wanted to better measure, quantify and improve their innovation outputs, but nobody knew how.


Some folks believe innovation shouldn’t be measured.  They say things like “you can’t measure creativity.”  They’re right in some utopian philosophical sense, but disconnected from the realities of resource allocation inside for-profit businesses.  Others think innovation can’t be measured, but that too is more of a recreational thought exercise than a useful working paradigm.  Pragmatists know they get what they measure, so what should they measure, and how should they measure it?


How should we measure innovation?


This might be the oldest question in the history of “innovation” as a field of study.  Worse, the dominant approach, one that focuses on measuring inputs rather than outputs, has kneecapped the field’s progress for generations.


This issue was captured in the 1967 “Charpie Report” by the US Department of Commerce, and in contemporaneous Organization for Economic Co-operation and Development (OECD) studies.[i]  These studies were early efforts to analyze innovation at the national level using statistical and empirical methodologies, and they explicitly tried to measure outputs (e.g., product launches and the economic impacts of those launches).


While innovation outputs (versus inputs) were identified as the ideal units of measurement, the studies ran into three obstacles:

  1. Limited and biased samples regarding innovation output data;

  2. Difficulty assessing the relative impact of innovation outputs; and

  3. Difficulty identifying an innovation’s country of origin.[ii]

The Charpie Report attempted to solve these problems by shifting the focus from measuring outputs to measuring inputs (e.g., R&D spending, patents filed, manufacturing volumes and expenses).[iii]  It was simply easier to define and count inputs than outputs.


The legacy of this shift has haunted innovation ever since.  Subsequent studies almost universally accepted the new input-oriented focus, which has dominated public and private research.  For example, a 2001 National Science Foundation workshop on innovation reported “participants generally used the term [innovation] in a way that focused on the processes and mechanisms for producing commercial applications of new knowledge rather than on the products or outputs from these processes.”[iv]


Innovation managers typically focus on inputs

They count how many ideas come in from ideation efforts, how many concepts are filtered out at each checkpoint, how they rate the technological risk, how they rate the team, the criteria used to advance or discontinue a concept, dollars spent, employees assigned, patents filed, customers interviewed, and so on.


Measuring inputs is all well and good, but what about outputs?  If you’re an innovation manager at a Fortune 500 firm, ask yourself: over the past five years, what percentage of your company’s funded innovation efforts have been discontinued?


Odds are, you have no idea.  Your boss has no idea.  Your employees have no idea.  Your shareholders have no idea.  You may be personally familiar with a few dozen projects, but what’s happening company-wide?  If you don’t even know your company’s innovation hit rate, how can you know whether it’s getting better or worse?  Without measuring outputs, nobody knows.  This may be the first time in your career you’ve realized how little rigor your organization applies to outputs.
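To make the bookkeeping concrete, here is a minimal sketch of the kind of output tracking the question implies.  It’s written in Python against a made-up project log; the outcome labels and numbers are hypothetical assumptions for illustration, not data from any real company.

```python
from collections import defaultdict

# Hypothetical log of funded innovation efforts: (year funded, outcome).
# Outcomes: "shipped" (reached market), "discontinued", or "active".
# In real firms this data is scattered across divisions; that scattering
# is the blind spot.
projects = [
    (2010, "discontinued"), (2010, "shipped"),      (2010, "discontinued"),
    (2011, "discontinued"), (2011, "active"),       (2011, "shipped"),
    (2012, "discontinued"), (2012, "discontinued"), (2012, "shipped"),
]

# Tally outcomes per funding-year cohort.
tally = defaultdict(lambda: defaultdict(int))
for year, outcome in projects:
    tally[year][outcome] += 1

# Hit rate per cohort: shipped / (shipped + discontinued).
# Only resolved projects count; "active" efforts aren't outcomes yet.
for year in sorted(tally):
    shipped = tally[year]["shipped"]
    discontinued = tally[year]["discontinued"]
    resolved = shipped + discontinued
    if resolved:
        print(f"{year}: hit rate {shipped / resolved:.0%} "
              f"({discontinued} of {resolved} resolved efforts discontinued)")
```

A tally this simple is all it takes to answer the question, and to see whether the hit rate is trending up or down.  The hard part isn’t the arithmetic; it’s that nobody collects the log.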


Everyone assumes someone “else” is keeping track of outputs, be it someone in the bowels of finance, accounting, or the executive suite.  Yet the awkward truth is, that person usually doesn’t exist, or their information is too buried to ever see sunlight and influence decision making.


This is a deep, hidden problem, because science can’t progress without a feedback loop.  The most stripped-down definition of science is: try something (input), watch and learn (output), repeat.  The inability to take outputs seriously is fundamental to why innovations fail 70% to 90% of the time, a statistic that hasn’t changed in a century.


There are many reasons outputs aren’t properly understood when it comes to corporate innovation. 

“Success” can be controversial to define and measure explicitly; there are rival philosophical factions.  People resist defining “innovation,” “success,” or even “results” in ways that undermine their agendas or job security.  And it isn’t anyone’s job to track the comprehensive outputs of innovation efforts inside big companies.


Corporate innovation efforts and incubators/accelerators come and go over the generations, leaving lessons learned to scatter with the winds of time.  Different groups, divisions or cost centers sponsor innovations without any disciplined processes or repositories for output data collection.  Perhaps most conspicuously, managers are quick to bury their dead.  Everyone wants to trot out their few wins while sweeping losses under the carpet.


It’s time to get real about the blind spot.

Executives in 2015 are having the same circular debates the US government did in the 1960s.  Meanwhile, innovations still fail just as often, despite all the things companies think they’re learning.  Businesses and their employees are also reaching the same conclusion the 1960s studies did: measuring inputs is easy and immediate, while measuring outputs is hard and falls outside the timeframe most people care about.



The problem is, it isn’t working.  Most innovations fail, and if this is ever to change we have to get past the esoteric circular debates, the reluctance to compile data (all of it, not just the few success stories), and the urge to shrink away from the core problem.  We won’t see further until we can at least acknowledge our blind spot.



[i] US Department of Commerce (1967), Technological Innovation: Its Environment and Management (“Charpie Report”), USGPO, Washington; see also OECD (1970), Gaps in Technology: Comparisons Between Member Countries in Education, R&D, Technological Innovation, International Economic Exchanges, Paris, pp. 183-184; see also Godin, The Rise of Innovation Surveys: Measuring a Fuzzy Concept, Project on the History and Sociology of STI Statistics, Working Paper No. 16.


[ii] See Godin, The Rise of Innovation Surveys: Measuring a Fuzzy Concept, Project on the History and Sociology of STI Statistics, Working Paper No. 16.


[iii] See Godin, The Rise of Innovation Surveys: Measuring a Fuzzy Concept, Project on the History and Sociology of STI Statistics, Working Paper No. 16.


[iv] EV Larson & IT Brahmakulam (2001), Building a New Foundation for Innovation: Results of a Workshop for the NSF, Santa Monica: RAND, p. xii.
