If I told you that contemporary ideas about innovation and disruption were driven mostly by ideology, ignorance, and marketing hype, would that seem controversial or extreme?
If my proposition were true, though, would it change the way you think about innovation?
It changed for me. I came to my conclusions over the Christmas–New Year break, when my reading list contained an unusually dense stack of essays and articles about innovation. Their common features were undefined buzzwords and formulaic models that avoided coming to grips with innovation, often lacking even a workable definition.
So let’s start there.
Innovation and disruption
Relying on a common dictionary definition of innovation isn’t enough anymore. The word has come to take on many more meanings than just change.
This is partly due to Silicon Valley myth-making. The technology giants have become the stuff of legend rather than the subject of rigorous analysis.
In this mythification, though, the meaning of the word ‘innovation’ has changed from its ordinary reference to change or reconfiguration to something more. It now seems to be always a positive form of change, and to inevitably involve technology.
The towering profitability of a handful of Silicon Valley firms, and their analogues elsewhere, has meant that the mythical innovation associated with them confers on the word ‘innovation’ a connotation of success.
And so innovation has become an organisational imperative without which, it would seem, extinction looms, as it did for the dinosaurs 66 million years ago.
Moreover, innovation has now become an almost compulsory quest, like a crusade, whose holy grail is ‘disruption’ – paradigm-changing innovation. Disruption is the black belt of innovation, revered and reached for as the ultimate end for which organisations should strive.
And so innovation has become a glorious vision of using technology to discover and fulfil ever newer customer needs and wants.
The new meaning, however, may not be aligned very closely with any reality.
Much of what is written about innovation is marketing hype. Change is just change. Innovation has always occurred, and not just in technology-focused organisations. Mostly it is realised as incremental improvement of existing processes, products and services. Once in a while there is indeed a paradigm shift. But not very often, including in Silicon Valley.
To understand the hype aspect better, I found it useful to think of it in terms of the Gartner Hype Cycle model, designed by the consulting giant to describe organisational technology adoption (see Figure 1, below). That cycle begins with the introduction of some new technology product or service, followed by high expectations for all the problems it will sweep away, and all the solutions it will offer, before reality sets in to reveal its limitations and defects.
The Gartner model calls for a positive outlook on coming back from disappointment with ‘enlightenment’, which I see more as a maturation process in which flaws are ironed out, and use cases are publicised to ease adoption. At some point a peak of organisational and user adoption is reached, followed by a slight decline in new market penetration and by what Gartner calls a ‘plateau of productivity’, which I prefer to think of as a settled integration into a wider tool-set.
If this model were derived from contemporary essays and articles on innovation and disruption, we would still find ourselves in the initial steep climb of media and marketing hype about all the positives and benefits. In fact, the myth of innovation would never see it leave that upwards curve. That idea alone gives me pause about the rationality of innovation hype.
Designing, initiating, and managing change has been part of organisational behaviour for as long as homo sapiens has come together for common purposes. Innovation has driven change, and come about as a product of change. The advent of electronics and microchips may have speeded up the pace of technology change, but it has not, in any way, accelerated the Enlightenment path of addressing existential human concerns more rationally.
Hunger, homelessness, disease, and generational poverty remain unresolved concerns for the vast majority of the planetary population. Incremental increases in per capita wealth, or GDP, merely mask the astounding concentration of privilege in ever smaller circles. If these are not concerns for you, it ought to be noted that precious little innovation has occurred even in technology organisations. Microchips get smaller and faster. Gadget form-factors change. But the underlying technology is the same as it has been since the later 1980s.
Seen in that light, I have come to see the myth of technology-driven innovation as a perpetual reiteration of a marketing process to speed up planned obsolescence and consumer spending cycles. It is not really innovation we are pursuing so much as the ‘new’. A newer version of software. A newer version of smartphone or tablet. A newer way of storing data. A newer way of arranging the same components of work and reward. We have been convinced that we must buy the latest gadgets, software, and other fetishes of our innovative lifestyles.
In short, we must forever upgrade our Silicon Valley toys. That we don’t really need to, but spend the money anyway, is testament to how ingeniously well thought-out the marketing hype is.
I argue the same is true for organisations. I use the term organisations rather than businesses to include not-for-profits and the public sector, which are no less affected by the innovation hype than profit-driven enterprises.
Organisations spend billions of dollars every year funding the only real beneficiaries of the innovation hype: telecommunications carriers providing online access, software companies exploiting state-developed internet infrastructure, and software and hardware manufacturers working in tandem to make things smaller and faster. Those same manufacturers then manage to eliminate the benefits with feature-bloat, agile development methods delivering half-finished code, and ever more network overheads used for spying on user data and matching it with rising tides of online advertising, or for clandestine surveillance objectives.
Technology enthusiasts like to argue that innovation and disruption have made Silicon Valley firms and their analogues elsewhere fabulously wealthy. But the enthusiasts overlook the thousands of firms that did not succeed. That failed and collapsed despite using the same business models.
Nor do enthusiasts like to dwell on the unpalatable truth proposed by former Wall Street executive, consultant, and social media commentator Richard Eskow: the Silicon Valley model is to talk up the valuation of a start-up company and exploit ‘the kinds of practices usually reserved for monopolies and monopsonies’, including tax loopholes, as Amazon has done, or the avoidance of regulations that add cost to competitors’ prices, as is the case for Uber.
The model includes using ‘newfound market share to a) bend government to your will wherever possible, b) screw down your suppliers’ prices, c) hit your customers with increased prices and/or new ads or other profit-making devices, and d) manipulate your customers without their knowledge. (See Uber, Amazon, Google, Facebook, et al.) This business model has directed much of the Valley’s efforts away from inventing genuinely creative new products – and toward the kinds of aggressive tactics that, as we’ve written before, would be very familiar to the Robber Barons of the 19th century. 
Eskow proposes that ‘sometimes “disruption” is a euphemism, whose real meaning is “use tax loopholes to undercut law-abiding vendors” or “employ Robber Baron business practices to cut suppliers’ prices”.’
‘Sometimes it means nothing at all.’
Eskow also points to a propensity for Silicon Valley executives to adopt crime cartel practices, like suppressing wages and discouraging career progression for their employees; this strategy was pursued as an agreement between the CEOs of Apple, Dell, eBay, Google, Microsoft, and others.
All this to say that the success of the Silicon Valley model is actually less likely to arise from innovation and disruption than from questionable corporate business practices. That doesn’t mean no innovation occurs at all, just that the hype about innovation should be interpreted critically.
But while the hype prevails, few published commentators seem to have stopped to consider the return on investment for buyers of innovative or even disruptive technology.
Just consider for a moment what the return on investment has been for you personally in getting rid of a perfectly workable iPhone 5s to pay a steeper price for an iPhone 6s. What has changed in 18 to 24 months to make it necessary for you to own this only slightly altered configuration? If you bought a piece of furniture, an appliance, a car, or a house, would your upgrade cycle be as rapid, and if not, why not?
Organisationally, consider for a moment so-called productivity software. What program features – not the interface design, but actual functionality – have changed in the global industry standard word processing software, MS Word, between its Office ’97 and Office 365 incarnations? Almost nothing! Were the iterative changes that did occur worth the upgrades to Office 2000, 2013, and 365? Is the latest incarnation, with online collaboration built in, worth the risk of not being able to work at all if telecommunications infrastructure fails? Can you not collaborate online without that feature built into word processing software? Is that feature really more about Microsoft owning your intellectual property?
Now also consider the overhead of a vast training and support industry, explaining to even experienced users how to find functionality suddenly hidden away by interface re-design, or how to work around inexplicable new limitations, like a sudden decision to end backwards compatibility with older versions, or horizontal compatibility with other products.
My point is that the innovation hype value proposition seems to flow predominantly one way towards ‘technology’ vendors, with questionable value for money or returns on investment flowing back to consumers and organisations. I don’t argue there is never any value, but I do suggest that many individuals and organisations would be hard-pressed to provide sound evidence for it.
Seen against this backdrop, it is tempting to dismiss a lot of innovation hype as a juvenile obsession with the ‘newest’ even when yesterday’s version was more sturdy and workable. It is an ideology driven by the notion that ‘agility’ justifies rushing to market with products and services not yet fit for purpose, expecting unwitting consumers to do what used to be beta testing while paying full price. It is a foreshortened profit cycle based on deprecating an iteration almost as soon as it has actually become a reliably useful tool.
More than that, I would argue it is a manifestation of unseemly greed, unprofessional business practices, and, most worrying of all, gullibility among our professionals (including managers), who uncritically embrace the hype. The gullibility, in turn, seems to me to be driven by a techno-scientific ideology fostered in the academy and spread into almost all professions and technical specialisations.
We can settle for a definition of ideology here that focuses on a predictable range of ideas and responses. It arises from a common way, among acolytes of ideology, of interpreting the world, of acting on that interpretation, and of shaping expressed opinion to match. Ideology commonly replaces critical thinking and independent judgement to the extent that it provides ready-made answers and examples, or the formula for inventing them as needed.
The academy is supposed to be independent of ideologies, but has succumbed to the temptation to reach for ready-made formulae and models to facilitate the 1980s shift in political economy. That shift dismantled decades of state regulations designed to maintain civil business practices and restraints on crime cartel behaviours, and it created an unhealthy reverence for profitability at any price. A consequence of that profit focus was the displacement of the traditional social value of education by an aggressive new commercialism that reinterpreted the value of degrees solely as investments in speculative employment prospects.
Education, in this paradigm, came to be crowded out of courses in favour of vocational training in technical specialties. Degree credentials changed from evidencing education to merely certifying training in method, models, and techniques.
Academics in the liberal arts disciplines threatened by this crude profit focus reached for the respectability of scientific method where they could. Qualitative analysis, with its emphasis on critical thinking, has been replaced with pseudo-scientific models that depend for their credibility on quantitative metrics, even where these make no sense.
The continuing rise since the later 1980s of computer and internet technologies has given further impetus to an eclipse of education by training in numbers-based methods and models.
The upshot is that graduates from professional and technical degree courses overwhelmingly deal in known quanta and absolute answers. A scientific experiment is either conducted correctly or not. A circuit either works or does not. A stress-strain analysis for a type of material either makes it suitable for a specific application or not. An equation is solved correctly or not.
Binary oppositions all, leading to a mind-set in which information is interpreted literally to yield unquestionable conclusions that bear no alternative interpretations. When this is all that is taught, there is a tendency for its students and graduates to assume this techno-scientific outlook as applicable to all other matters, and as superior to any other perspective for its ability to deliver certainty. That’s what I call techno-scientific ideology. It is not so much a perspective anymore as a secular religion which excludes all other human logics, dismissing them as misguided or less useful precisely because they don’t offer a deterministic certainty. And probably because they are harder to understand and apply.
The longer term consequence of this shift from education to ideology has been the dilution of human and social values in the professions and technical areas. Without the study of history, graduates know nothing about the changes that led up to their own socio-political environments. Without the study of philosophy and literature, graduates know little about the development of the major philosophical, political, and religious ideas that created Western civilisation. That makes understanding and evaluating currently popular and dissident ideas much harder, and creates a vacuum for ethical considerations, with morality becoming little more than the insight ‘don’t get caught’, or the personal discipline of uncritically obeying a set of rules.
Without a smattering of the history of the arts more generally, including dance, music, painting, poetry, sculpture, and other forms, aesthetic and ethical appreciation is limited, and graduates may find it difficult to tell apart a marketing message from news reportage, or put more simply, to differentiate between information and propaganda.
What is lost with these dimensions of liberal education is critical faculty, and an understanding of the analytical tools available to critique matters not beholden to numbers and formulaic solutions.
Techno-scientific ideology probably serves technical specialists well in their professional endeavours, but it appears to fail at the stated objective for tertiary education: to help shape people as capable civic actors.
If that is tolerable in ‘nerds’, it may be less so in nominal leaders, and professionals, all of whom are equally exposed to an unduly heavy focus on techno-scientific ideology, with its bias towards perspectives based solely on binary oppositions and static models. Perspectives that reduce human concerns solely to quanta to be manipulated by formula.
There is significant concern about this trend in professional training, centred around the ubiquitous Harvard MBA methodology.
Journalist, author and Harvard MBA graduate Phillip Delves Broughton said of his Harvard experience that he was astonished at how numbers-driven the management models were, and how much this abstracted theory from practice, alienating human qualities along the way, as if people could be reduced to mere statistics. He reflected that the entire management culture bred this way was infected with the idea that you could apply a hugely simplified model to contexts and problems that were far too complex and intricate to resolve with such simple-minded solutions.
Kevin Hassett, a Director of Economic Policy Studies at the American Enterprise Institute, explained the disastrous failings of Wall Street banking and share trading practices between 2007 and 2009 as being largely driven by MBA-derived uncritical faith in formulaic models that led to careless arrogance and narcissism rather than professionalism. 
And so what we had is failed models that were believed 100% by narcissists running organisations and that brought the organisations down. If you had people that were worldly and suspicious of these mathematical models running the organisations, they’d be a lot better off.
Retired investment banker and author Will Hopper looks back at Frederick Winslow Taylor as the point of origin for what he calls a form of insanity in management. Taylor was an efficiency consultant who pioneered process re-engineering by making early factory floor processes more efficient. But Hopper says that attempting to apply principles that work well with objects to people is a disaster in waiting.
Taylor is particularly relevant to this discussion because he is closely tied to the normalisation of business process management, the establishment of business schools, the notion that management is a science governed by immutable numbers and method, and that professional managers can be ‘manufactured’ without the ‘apprentice’ process of learning domain knowledge inside an organisation. Domain knowledge being, among many other things, insight into the specific organisational dynamics of product or service development and production, customer demographics, regulatory constraints and opportunities, labour relations, and the lessons to be learnt from failures.
If you don’t possess “domain knowledge”, how do you run a company? Well, you do it through the accounting department … this leads at once to the manipulation of both underlying activities and figures. … to improve the numbers, not to improve the product. This is a theme that runs through the whole of American business, particularly in the 1980s and the 1990s. So the characteristic of the new age of management as a profession, is improving the numbers, not improving the product.
Hopper reflected on the irony of clueless CEOs engaging management consultants to cover their lack of domain knowledge, only to discover that the consultants themselves had been trained the same way, and lacked the same domain knowledge.
Harvard School of Business academic Rakesh Khurana says the anti-government focus of the Reagan-Thatcher era ‘was paradigmatic’ in establishing a mind-set that no institutions beyond capitalist organisations were necessary.
But if you talk to any serious student of capitalism, institutions have always been the bedrock for an effective market system, and an effective society. What is a society without some form of regulative government which not only creates the minimum aspects of behaviour, but the aspirational aspects of behaviour. What is a society without any sense of professions and obligations, and notions of honour, duty? Those are the kinds of things that are really the lifeblood of a society. … In fact I think the thing that was really probably the saddest thing about the last couple of decades … is that the notion of bringing up words like ‘duty’, ‘responsibility’, ‘institutions’, ‘partnership’, you were often derided and seen as soft-minded … they were terminologies that were seen as quaint or seen as naive, if not mocked.
Khurana said that an original academic compact with society had been broken in departing from the idea that management graduates would emerge to ‘put the interests of society and the interests of the economic welfare of their firm before their own individual interests’. Instead he sees management culture being so corrupt that senior managers need to be ‘bribed’ with bonuses and stock options simply to do the job they are paid to do. ‘In no other occupation or profession is that part of the modus operandi.’
Perhaps the strongest critic of Harvard MBA-style management training is business professor Henry Mintzberg. He says its fetishist focus on leadership is really a focus on followership, elevating the management graduate above all others, and removing from the others the duty, power, and responsibility to think and act because they are relegated to a status of obedient acolytes, merely following orders.
Mintzberg pointed at research he did to discover that of Harvard’s 19 star MBA graduates in 1990, ten were failures, four had created questionable reputations for themselves, and only five had tracked well since graduation. His point: no one seems to look at the proven performance of the ‘products’ arising from the prevailing management culture.
We’ve corrupted the whole practice of management, it’s utterly, utterly corrupt from top to bottom; not everybody, but much too much of it is corrupt. It is a cultural problem. And by the way, it’s largely an Anglo-Saxon problem I think. I think the worst of it is in the US, and second is the UK. I think Canada has been smarter. In the UK, for example, there’s a long history not just of MBAs but of accountants running everything. In other words, what you had is a detachment of people who know the business from people who are running the business.
These critiques narrow down to concerns about ignorance, where ignorance is a lack of knowledge rather than any malicious or obtuse tendency. But ignorance of a kind that shuts out a creative, insightful potential for imagining different perspectives, and how such different perspectives might reveal opportunities for improvements to organisational processes, products, services, and strategy alignments.
Unfortunately this kind of ignorance is also expressed in jarring absences of expected aesthetic and ethical behaviours. Have you ever witnessed a high performing IT specialist turn up to work in dirty clothes, evidently also without having showered or shaved? Have you heard conversation between such specialists stray into appallingly uncharitable characterisations of colleagues? I confess I have seen more of this than I thought possible.
Worse, when aesthetic and ethical ignorance is displayed by professionals, the tableaux may be of workplace bullying, of bigotry and pettiness, and of an apparently mean-spirited micro-management that is actually a sign of lacking insight, poor people skills, and absent professionalism.
The hallmarks of professionalism are not complex, but they run counter to a training background delivered exclusively as techno-scientific doctrine. Instead of relying solely on formulae and models, professionals ought to be expected to transform that knowledge into expertise. They do this by adapting knowledge according to professional and social insights into methods and practices that achieve both organisational outcomes and positive interactions with colleagues, from the most junior to the most senior.
In the field of knowledge management, the distinction between a technician and a professional is that the latter can transcend and add to the ‘explicit’ knowledge anyone can acquire or transmit. A technician can only apply it.
Knowledge management specialists describe the transcendent characteristic as ‘double-loop’ learning, more commonly referred to as critical analysis and thinking. Where single-loop learning offers little scope for proactively addressing failures or shortcomings, double-loop learning responds to risks and negative outcomes by questioning the underlying assumptions behind method and technique (see figure 2 above).
This is ‘tacit’ knowledge that generally cannot be learnt from textbooks or instruction because it involves an infinitely adaptive analytical ingenuity. Put another way, the methods and techniques developed to resolve a particular problem or attain a particular goal may not be repeatable, and only the experience of the analysis and synthesis involved in that instance remains valuable in approaching new challenges.
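As a rough caricature of the single- versus double-loop distinction – everything here, from function names to thresholds, is my own hypothetical illustration rather than anything from the knowledge management literature – the difference might be sketched like this:

```python
# Toy caricature of single- vs double-loop learning.
# All names and numbers are hypothetical illustrations only.

def single_loop_step(setting, target, result):
    # Single-loop learning: an error triggers a corrective tweak to the
    # technique, but the goal and the method are taken as given.
    error = target - result
    return setting + 0.5 * error  # proportional correction, nothing questioned

def double_loop_step(setting, target, result, error_history):
    # Double-loop learning: if corrections keep failing, question the
    # underlying assumption (here, the target itself) before correcting.
    error_history.append(target - result)
    if len(error_history) >= 3 and all(abs(e) > 10 for e in error_history[-3:]):
        target = (target + result) / 2  # revise the assumed goal
    return single_loop_step(setting, target, result), target
```

The point of the caricature is that the single-loop function can only ever tweak harder in pursuit of a fixed goal, while the double-loop function holds the goal itself open to revision once the evidence warrants it.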
Critical analysis relies both on the confidence to question orthodoxy, and an awareness of pluralist alternatives derived from a broader body of knowledge and experience than is generally passed on in technical training. It requires genuine education, an inquisitive mind that seeks knowledge beyond formal requirements, and the tenacity to develop a capacity to reconcile two or more apparently conflicting characteristics or objectives. That reconciliation is often referred to as the process of considering thesis and antithesis to resolve contradictions between them and develop a synthesis of the best points in both. In practice, this often means a synthesis of more than just two conflicting perspectives, by thinking creatively in a way that is alien to binary thinking or fixed formulae and models.
There is a risk for professionals if they question orthodoxy: they might be seen as divisive or even subversive of authority, particularly in organisations with rigid authority and governance structures. There is an even bigger risk, though, of stifling the creativity necessary for innovation by discouraging independent critical thought. The kind of thought that outpaces techno-scientific ideology, and that is not blind to the possibility of client and customer perspectives which are often not beholden or sympathetic to binary thinking. Put another way, assumptions by techno-scientific rationalists that all others think like them too, or ought to, can alienate customers and misinterpret market signals with disastrous consequences for organisations. Worse, they can also alienate colleagues and build a culture of mistrust in which any change will be difficult to achieve no matter how much customer and market changes demand it.
The wisdom of a once revered management guru, Peter Drucker, may have faded from the minds of professionals for not being new enough, but remains compellingly persuasive. Drucker assumed a noblesse-oblige duty for professionals to become ‘broad humanists’ with ‘social responsibility’. Not for the sake of altruism, but because the education and knowledge inputs necessary for broad humanism and social responsibility are in themselves broader than any technical training. That makes broad humanists more effective in professional rôles.
Having more pieces of information at your fingertips allows you to make more ideational connections between the separate pieces. The greater an exposure to a wide body of ideas, not necessarily related to professional or organisational concerns, the greater the chance for a creative cross-pollination of ideas that may have professional value in an organisational context.
And so … ?
Having now covered the bland and desolate territory of all the reasons why innovation might be stifled or stunted, there are nevertheless still some things that managers and specialist professionals can do to encourage creative, innovative thinking. This begins with acquiring talent.
HR and recruitment
Most organisations exceeding around 20 employees are specialised enough to rely on an HR function for selection of new talent. The larger an organisation gets, the more likely it is this function will be supplied by an external recruitment agency. Unfortunately HR specialists and agencies work on formulaic processes themselves, filtering out many apparently mismatched skills and experiences from the available talent pool.
In doing so, they tend to concentrate on an average set of indicators that whittles down a candidate pool to just those people exhibiting a ‘safe’ average of skills and experiences. It is selection by formulaic risk reduction rather than judgement based on strategic needs. You can visualise this by thinking of a normal distribution on a bell curve, flattening out to a straight line at either end (see figure 3 above).
The ‘cold zone’, or rising part of that curve might be junior qualities, skills, and experience mismatched with the position requirements. The ‘comfort zone’, or highest part of the curve represents qualities matched most closely with functional job specifications. The ‘hot zone’, or the falling part of the curve, shows a diminishing number of candidates whose skills and experience are outstanding, but not literally aligned with the tasks and skills listed in a position description.
The safe middle ground – the comfort zone – is what HR specialists will deliver. Their formula is a selection criteria matrix based on their own limited understanding of functional requirements. There is no room in this process beyond the tick-and-flick methodology to consider more strategic perspectives on new recruits. How often have you read a job advertisement that made you wonder whether its author actually understood the skills and experience being listed?
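A deliberately simplistic sketch may make the tick-and-flick mechanics concrete. Everything below – the criteria, the candidates, and the cutoff – is a hypothetical illustration of my own, but it shows how literal keyword matching delivers only the comfort zone, filtering out the outstanding ‘hot zone’ candidate whose skills do not match the position description word for word:

```python
# Hypothetical 'tick-and-flick' selection-criteria matrix: candidates are
# scored purely on literal keyword matches against a position description,
# so outstanding but non-literal profiles score low and are filtered out.

POSITION_CRITERIA = {"java", "sql", "agile", "stakeholder management"}

def criteria_score(candidate_skills):
    """Fraction of the listed criteria matched literally."""
    matched = POSITION_CRITERIA & {s.lower() for s in candidate_skills}
    return len(matched) / len(POSITION_CRITERIA)

def shortlist(candidates, cutoff=0.75):
    """Keep only 'comfort zone' candidates at or above the cutoff."""
    return [name for name, skills in candidates.items()
            if criteria_score(skills) >= cutoff]

candidates = {
    "literal match": ["Java", "SQL", "Agile", "Stakeholder management"],
    # Outstanding adjacent skills, but no literal keyword matches:
    "hot zone": ["Scala", "PostgreSQL", "Lean", "Team leadership"],
}

print(shortlist(candidates))  # only the literal match survives
```

Nothing in such a filter can recognise that the ‘hot zone’ candidate’s skills may transfer readily, or that their breadth may be strategically more valuable than a literal match.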
This is also where prejudice can creep in to exclude non-Caucasians (particularly if their English-language skills are poor), women (particularly those returning from maternity absences), and older people whose skills and experience can be thoughtlessly dismissed as ‘dated’. Aptitude, intellectual flexibility, and strategic fit never get a look in when using the tick-and-flick approach. Not unless management directs HR specialists on how to do it.
Maybe hiring briefs need to be expanded to include indicators of creativity and critical thought: is there evidence of an education wider than technical training? Is there evidence of critical thought and creative problem-solving as an integral part of previous roles? How clearly can candidates express themselves in writing and in more personal circumstances? Such qualities reflect on individual clarity of mind and purpose. By what mechanisms do HR specialists test this, if they test it at all? Do the HR specialists themselves have the necessary training and experience to recognise skills and experience they don’t possess themselves?
A significant hurdle for creative new recruits is to come into a rigid management structure, with line or project managers who are under pressure to perform against rigid metrics that have no room for deviation from a given ‘norm’. ‘This is how we do things here’ is the induction and ongoing mantra.
Playfulness and ‘unproductive’ time spent in exploring ideas and people outside formal structures may be neither valued nor tolerated. And yet this is precisely the interpersonal space in which creativity and innovation occur.
How do you get around this obstacle?
It’s difficult to balance productivity goals with uncertain outcomes derived from what looks like ‘goofing off’. But successful managers always find a way to let their staff breathe. To let them express their individualities. To let them come together outside the strictures of functional teams, and to permit the cross-pollination of ideas.
That kind of human focus is much more likely to come from managers or team leaders who have not just risen through technical or specialist ranks by seniority; they have leadership and management credentials, expertise, or qualities independent of, and in addition to, their professional and technical specialisations.
One way such managers might humanise their team environments is to encourage one team to host a regular, semi-social ‘invitation’ to show its appreciation of another over morning tea, or late-afternoon drinks and finger food. It is an ice-breaker that can be gently guided towards collective problem-solving, or just better cooperative relationships generally.
Another is an informal immersion programme, allowing individuals to rotate through other teams to better understand what they do and how they do it. That also allows for the creation of informal networks in which ideas have a chance to be cross-pollinated.
The best method I’ve observed in practice is a fairly formal one: the planning day. A day, or a part of a day, when the leader takes the team out of its operational environments and routines to consider its goals, performance, and challenges. A regularly scheduled day on which every team member is called on to come up with answers on how to get things done, and how to do things differently. A day on which the team offers feedback on what worked best, and what didn’t. A day on which team members discuss how they want to develop their skills and careers.
Planning like this should be carefully documented for future reference and comparison with actual achievements down the track.
If that sounds really basic or simple, it is important to keep in mind two critical factors that are less basic and simple:
- this approach only works in a mature, high-trust environment in which team members are professionals, mutually supportive, and certain not to face censure or reprisals for expressing even the most critical or outlandish feedback and ideas; and
- the entire team must contribute to goal-setting, backlog listing, prioritisation, and assignment of responsibilities to tasks.
A third hurdle to innovation can be a pre-determined group process of evaluating and refining ideas. Handled crudely, this process will kill creativity and innovation by imposing a deference to authority, and a group-think discipline.
Ironically, that group-think process is precisely what is proposed in a forthcoming Harvard Business Review article by leadership and innovation professor Roberto Verganti: ‘The Innovative Power of Criticism’. Verganti reduces critical thinking to a formula, effectively recreating it as part of the techno-scientific ideology and thereby neutralising its effectiveness. The danger is that even well-meaning managers will stifle creative thinking by imposing a fixed model and prescribed techniques that achieve the opposite of the intended outcomes.
The group-think that flows from the blinkers of professional specialisation, and from narrowly formulaic approaches to problem-solving, always heads for what is assumed to be safe territory. Safe territory is formula and method. It rarely yields more than incremental improvement. Not a bad outcome in itself, but it is not innovation.
Some organisations are just not built for creativity and change. If KPIs and management structures are too rigid, and too far removed from operational activities, no creative spaces may exist for lateral thinking and the playfulness necessary to discover what new approaches might work better than the organisational standard. This profile is particularly common in public sector organisations, but also emerges in larger enterprises in the form of a creeping bureaucratisation of support functions and silo responsibilities.
To get around such ossifying tendencies, executives might consider breaking up functional divisions into smaller, more nimble matrix teams. Teams tasked not just with linear productivity goals, but also with inventing the future: a to-be mode of operation with new activities, processes, and strategies, and with clearly visualised ideas about potential changes driving a future client or customer base.
In smaller organisations it might be easier to break out of an orthodox, sterile functional paradigm by rotating leaders out of their comfort zones, building ‘fun’ time into work patterns, and encouraging bottom-up feedback with immaterial (non-financial) rewards for encouraging a flow of ideas on how things could be.
All this is not to say that formula and routine-driven staff and leaders are not necessary. They are the backbone of ‘right-now’ operations. It’s just that they are unlikely to change what they do or how they do it until they are guided to accept new methods that they, themselves, are not creative enough to develop and initiate.
The outsider perspective
Verganti’s four-step formula includes a final scrutiny of group-developed ideas by ‘outsiders’. That sounds like the standard consulting model, with someone like Verganti being brought in to create a one-off creativity ‘event’. But nothing he writes addresses embedding the capacity for that creativity inside an organisation.
We should think about all the variables an outsider might bring to our attention, and then about how to embed that outsider perspective as an integral part of our organisational culture. This isn’t just a case of de Bono’s many hats, or mystery shopping. Nor does it rely on recruiting an unlikely complement of savants. It is about fostering independent judgement.
We should aim at the kind of judgement that is more than hiding behind ready-made models to avoid responsibility. That includes asking pointed questions: Why should we take this departure from orthodoxy? What do we risk by doing it? What do we risk by not doing it? Does it mean abandoning orthodoxy, or just supplementing it?
In that thinking it should become obvious, too, that all tasks or job roles that devolve into the application of formulaic method are redundant. They will eventually be automated precisely because they require no human judgement. Just predictable steps in a process.
The path to innovation is not certain and cannot be reduced to predictable or replicable formulae. It requires individual contextual judgement, exercised afresh every time, not applied once and then repeated as formula.
It is risky as hell. But it is the only way to innovate. Innovation driven by changing value propositions, not change for its own sake. Innovation to deliver sustainable strategic advantages, and the kind of career development and satisfaction that inspires the best of employees to commit themselves to the best of workplaces.
 Eskow, R. (2015, 12 April). ‘Rise of the techno-Libertarians: The 5 most socially destructive aspects of Silicon Valley’. Salon.
 All quotes in the critique section come from the transcript of the Australian Broadcasting Corporation’s Radio National Background Briefing program on Sunday, 29 March 2009, for a segment entitled ‘MBA: Mostly bloody awful’.
 After a model developed by Chris Argyris in the early 1990s, and explained in Argyris, C. (1991). ‘Teaching smart people how to learn’. Harvard Business Review, 69(3), 99-109; and Blackman, D., Connelly, J., & Henderson, S. (2004). ‘Does double loop learning create reliable knowledge?’ Organization, 11(1), 11-27.
 See, for example, Edwards, S. L., & Bruce, C. S. (2006). ‘Panning for gold: Understanding students’ information searching experiences.’ In C.S. Bruce, G. Mohay, G. Smith, I. Stoodley, & R. Tweedale (Eds.), Transforming IT Education: Promoting a Culture of Excellence, (pp. 351-369). Santa Rosa, CA: Informing Science Press; and Gerami, M. (2010). ‘Knowledge management.’ International Journal of Computer Science and Information Security, 7(2), 234-238.
 See, for example, Aho, I. (2013). ‘Value-added business models: linking professionalism and delivery of sustainability.’ Building Research & Information, 41(1), 110-114. doi: 10.1080/09613218.2013.736203; Schein, E. (2010). Organizational Culture and Leadership (4th ed.). New York: John Wiley & Sons [PDF version]; and Sinclair, A. (1991). ‘After excellence: Models of organisational culture for the public sector.’ Australian Journal of Public Administration, 50(3), 321-332. doi: 10.1111/j.1467-8500.1991.tb02293.x.
 See, for example, Arling, P.A., & Chun, M.W.S., (2011). ‘Facilitating new knowledge creation and obtaining KM maturity.’ Journal of Knowledge Management, 15(2), 231-250. doi: 10.1108/13673271111119673; Burke, R., & Barron, S. (2014). Project Management Leadership: Building Creative Teams (2nd ed.) Chichester, U.K.: John Wiley & Sons [PDF version]; and Dauber, D., Fink, G., & Yolles, M. (2012, April 17). ‘A Configuration model of organizational culture.’ Sage Open. doi: 10.1177/2158244012441482.
 Drucker, P. (2007). The Practice of Management (rev. ed.). New York: Harper Collins Publishers [PDF version].
 Verganti, R. (2016, January/February). ‘The Innovative Power of Criticism.’ Harvard Business Review, 94(1-2).