Self-Improving Systems

General systems theory was outlined in the post-WWII era when innovative thinkers began to consider how and why biological and ecological systems worked — or sometimes didn’t work.  Subsequent applied and theoretical work expanded the use of these insights.  Most importantly, we now understand that successful systems must be self-preserving, self-controlling and potentially self-improving.

A self-controlling system is often described using the thermostat model.  A system has a goal, a measurement device, the ability to compare actual with desired results and some action taken to bring the system back within its control limits.  More sophisticated systems have secondary feedback loops to check the measurement, feedback and action steps.  Self-improving systems also have some built-in driver that improves the goal and results through time.
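
To make the thermostat model concrete, here is a minimal sketch of such a feedback loop in Python.  The class name, the noisy sensor and the specific set-points are illustrative assumptions, not a description of any particular system.

```python
import random

class Thermostat:
    """A minimal self-controlling system: goal, measurement, comparison, action."""

    def __init__(self, goal=20.0, tolerance=0.5):
        self.goal = goal            # desired result
        self.tolerance = tolerance  # control limits around the goal

    def measure(self, room):
        # Measurement device: a slightly noisy reading of the room's state
        return room["temp"] + random.uniform(-0.1, 0.1)

    def step(self, room):
        actual = self.measure(room)
        error = actual - self.goal  # compare actual with desired
        if error > self.tolerance:
            room["temp"] -= 0.5     # action: cool back toward the goal
        elif error < -self.tolerance:
            room["temp"] += 0.5     # action: heat back toward the goal

room = {"temp": 25.0}
thermostat = Thermostat()
for _ in range(20):
    thermostat.step(room)
print(round(room["temp"], 1))       # settles near the 20.0 goal
```

A self-improving version would add a second loop that tightens the tolerance or revises the goal itself over time.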

Most development economists have concluded that in the long run productivity improvements are the key to economic well-being, far surpassing the contributions of simple resource availability.  Productivity improvements are sometimes created by individuals’ insights and brilliance, but more often by the cumulative results of self-improving processes.  Hence, our economic future depends upon the broad application of self-improving systems.
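
A short worked equation shows why cumulative improvement dominates one-time gains.  Assuming, purely for illustration, a 2% annual productivity gain sustained for a century:

```latex
% Output Y_t after t years of productivity growth at rate g:
Y_t = Y_0 (1 + g)^t, \qquad Y_{100} = Y_0 (1.02)^{100} \approx 7.2\, Y_0
```

A one-time 2% improvement, by contrast, leaves output at 1.02 times its base level indefinitely.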

The biological model of evolution shows that “survival of the fittest” results in populations that are best prepared to thrive in the range of environments encountered historically.  On average, this means that existing species are well positioned for most futures.  It does not rule out decimation due to some new environment, competitor or predator.  Environmental pressures such as pollution or global warming could threaten the beneficial effects of the biological model on economic growth.

Biologists and some anthropologists also say that our natural family and other small groups have developed to meet the needs of the species.  In spite of the many changes in culture since the “enlightenment”, these built-in relationships seem solid and provide a self-preserving parenting and small group cycle.

The development of the scientific method and the use of peer review transformed science from natural philosophy, alchemy and astrology into a cumulative force for progress in scientific understanding.  This force has had a great economic benefit, expanding the use of the scientific method to a broader and broader sphere.  While philosophers and politicians raise valid questions about the ethical use of scientific discoveries, the march of science continues.

Representative democracy with “checks and balances” has also functioned as a self-improving system.  We now understand the need for cultural support for the rule of law.  We know that a variety of representative democratic systems can work well.  We know that there are sometimes populist, military or ruling-class pressures that can undermine or destroy a democratic system.  We understand that democracies are often slow, sub-optimal and inconsistent.  Nonetheless, representative democracy has generally been a force for economic progress.  The consensus that Western-style democracy will be the dominant model of enlightened governance was much stronger a decade ago, but it remains the likely choice for most countries.

Economic systems like capitalism and international trade can also be seen as self-improving systems.  Adam Smith’s “invisible hand” and David Ricardo’s principle of comparative advantage lead competing interests to naturally improve their performance through time.  Modern economists generally agree that capitalism is not automatically “ideal” due to market failures, monopolies, public goods, externalities, unequal distribution of income, deadweight costs of booms, busts and bubbles, and the potential sustained waste of resources due to inadequate demand.  Recommended solutions to these shortcomings can be implemented through the political process.  As with representative democracy, some form of regulated capitalism is an ongoing positive force for economic growth.

Finally, the systematic adoption of formal quality measurement and improvement systems by most organizations is another form of self-improving system.  By clearly defining goals, measuring progress, adapting and providing support structures that encourage process re-engineering and continuous process improvement, organizations have found that annual productivity improvements are possible in nearly all areas.  The quality revolution continues to expand its reach, moving from operations areas into overhead, service, government and not-for-profit applications.  The set of quality tools and best practices continues to grow.  The pressures of the economic and political marketplaces ensure that this will remain a source of progress.

There are areas of modern life where self-improving systems do not provide built-in assurance of progress in the future.  Culture, religion and international relations do not work as self-improving systems today.

Historically, culture continued through inertia or the reinforcing interests of the ruling groups in society.  Without changes in the environment, a self-preserving system was common, even if a self-improving system was not.  Today’s increased global communications and cultural awareness help prevent the total disintegration of culture.  But the lack of thought leaders or leading cultural influencers today means that subcultures may improve, while the overall culture is not positioned for progress.

Religions were historically integrated with culture and reinforced it.  The “enlightenment” development of secular viewpoints and increased awareness of world religions has greatly complicated attempts by any one religion or ecumenical group to create a self-improving religious system.  Historic attempts to more deeply analyze a religion often resulted in inflexible forms such as scholasticism.  Attempts at reformation with ongoing evolution of doctrine resulted in splinter groups or fatal dilution of core content.  Within the secular humanist tradition, some progress is made through self-help books and applied psychology, but most observers would say that the self-awareness of existential philosophy has been a mixed blessing for people trying to create their own forms of meaning in life.

International relations is also a system without inherent stability.  Contradictory philosophical views dating back to the Greeks still have enthusiastic supporters.  The idealistic goals of the United Nations and other world organizations are appealing, but the institutions do not clearly ensure the ongoing improvement of the human condition.  Greater economic and political integration in Europe is offset by the expansion of the number of nation states.  Mutually assured destruction evolved as a self-preserving system in an era of two superpowers, but provides no such assurance today.  The rise of Brazil, Russia, India and China to complement the US, Europe and Japan creates a multi-polar world without a clear system for ongoing improvements or avoidance of major conflicts.

The rise of self-improving systems in biology, science, economics, national governance and quality processes provides hope for a future of unlimited possibilities.  The lack of self-improving systems for culture, religion and international relations raises major concerns for the future.

The Quality Paradigm

The Quality paradigm has emerged as a significant competitor to the Financial paradigm.  The Financial paradigm says that organizational results are best delivered through the sum of individual rational decisions focused on incremental costs and benefits.  The Quality paradigm agrees that costs and benefits matter, but focuses on the underlying process as the primary driver of minimizing inputs (costs) to produce a given output (benefits).  The Quality paradigm evolved from the “time and motion” studies of “scientific management”.  It has a process engineering focus, aiming to optimize the relationship between inputs and outputs.  Improvements are treated as inherently valuable, without tallying financial valuations.

The Quality paradigm made progress because its effectiveness in Japanese manufacturing became apparent by the 1970’s.  It also gained favor because Western organizations, relying on financial decision-making tools, were clearly not delivering optimal results.

The Quality advocates made five major criticisms of the existing practices.  The practices pegged the total cost of poor quality at 1-2%, while actual costs ranged from 5-10%.  The financial approach often created a cost-reduction mindset when greater opportunities existed for improved revenues and margins through quality products and customer service.  The marginal approach overlooked less material cost-reduction opportunities that were very significant in the long run.  It optimized individual functions, while ignoring connection costs.  It underutilized the assets of workers who could make improvements.  While some of the criticisms were misplaced or exaggerated, the Quality paradigm presented a compelling story that led to changes.  The new, process-based approach was delivering value that the old approach had missed.

The Quality paradigm delivered several insights that could be repeatedly applied to reduce costs, reduce defects, increase volumes, increase timeliness and better meet customer needs.  First, a controlled system inherently reduces errors and risks and leads to improvements.  Second, examining a whole process in terms of well-defined desired outputs focuses staff on the greatest improvement opportunities.  Third, the key to understanding process failures is understanding the drivers of variability.  Fourth, variability naturally accumulates through a process, leading to greater defects and costs.  Fifth, inventories of time and goods hide current performance and improvement opportunities.  Sixth, there is no practical limit to the improvements possible in reducing variation, reducing defects or improving input/output ratios.  Seventh, a quantum-leap process breakthrough is usually possible.  Eighth, in the long run quality improvements usually have a net benefit, rather than a net cost.
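
The fourth insight, that variability accumulates through a process, can be checked with a small simulation.  This is a sketch under assumed numbers: a toy process of independent stages, each taking a nominal 10 time units with a standard deviation of 2.

```python
import random
import statistics

def process_time(stage_sds):
    """One pass through a multi-stage process: each stage takes a nominal
    10 time units plus independent random variation."""
    return sum(random.gauss(10.0, sd) for sd in stage_sds)

def observed_sd(stage_sds, trials=10_000):
    """Standard deviation of total process time over many simulated passes."""
    return statistics.stdev(process_time(stage_sds) for _ in range(trials))

print(observed_sd([2.0]))      # one stage: sd ≈ 2.0
print(observed_sd([2.0] * 5))  # five stages: sd ≈ 4.5 (2 * sqrt(5)), not 2.0
```

Because independent variances add, the total spread grows with every stage, which is why reducing variability early in a process pays off downstream.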

In the last two decades the Quality paradigm has come to complement the Financial paradigm, leading to a balanced scorecard approach to strategic planning with both financial and operations measures in the performance dashboard.  Finance continues to emphasize costs and benefits while Quality focuses on the underlying processes.  This combination approach is delivering more valuable results for most firms today.

The Financial Paradigm

The financial decision-making paradigm was developed in the 19th century by the “marginal” school of economics and refined into modern financial tools by the 1950’s.  In essence, it says that, by comparing incremental benefits with incremental costs, rational decisions can and should be made.  While academic economists refined the exact conditions under which this is logically true, practical business professionals have simply adopted these tools.  Business students learned to choose the greatest net benefits.  Some also learned to calculate the risk-adjusted, interest-rate discounted incremental after-tax cash flows.
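
As a sketch of what such a calculation reduces to, here is an illustrative net-present-value computation in Python.  The cash flows, the 30% tax rate and the 10% discount rate are assumptions made up for the example.

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

tax_rate = 0.30
pretax = [-1000.0, 500.0, 500.0, 500.0, 500.0]  # outlay today, then returns
# After-tax incremental cash flows (the initial outlay is untaxed here, for simplicity)
after_tax = [pretax[0]] + [cf * (1 - tax_rate) for cf in pretax[1:]]

print(round(npv(0.10, after_tax), 2))  # ≈ 109.45: positive, so the rule says accept
```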

In practice, finance professionals and business decision-makers have seen limitations in the theory, but have adapted it to make “rational” decisions.  If qualitative factors exist, they are ignored, translated into numbers or considered separately.  If key numbers are unknown, they are estimated, modeled or limited.  If factors are interrelated, a simulation model is run or lesser factors are omitted.  Cash flows 30 years out are ignored due to their low present value.  Rules of thumb are used as simple linear relations.  The whole is defined as the sum of the parts.  The principle of diminishing marginal returns is used to eliminate inconvenient, minor or detailed items.
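
For instance, the arithmetic behind that 30-year rule of thumb, assuming a 10% discount rate for illustration:

```python
# Present value of $1 received 30 years from now, discounted at an assumed 10%:
print(round(1 / 1.10 ** 30, 3))  # 0.057, i.e. about 6 cents on the dollar
```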

For short-term or long-term decisions, the standard financial decision-making tools are adapted to meet most situations.  With experience and business judgment, decisions are made with a high degree of confidence using this single approach.

In addition to the common “adjustments” accepted by financial analysts in practice, there are deeper criticisms of the financial paradigm.  It is inconsistent with the historical, accrual cost approach required in public accounting.  When managers are unable to estimate factors, analysts construct them.  For major investments or decisions, the inherently qualitative factors may be most important.  Fully-loaded costs are used throughout most financial systems, so decisions guided by “the numbers” are not truly marginal.  Purely financial incentive systems lead to padding, managed numbers and missed opportunities.  Focusing on financial results alone leads to neglect of the asset, operations and customer levels of the balanced scorecard.  Accounting systems are not structured to monitor key decisions, but to eventually report historical costs.  The financial decision-making paradigm does not directly help managers to solve problems or serve customers, and it can create an adversarial relationship between line managers and the financial staff.

The 1980’s “quality revolution” led to a period of significant support for a variation on Shakespeare’s maxim: “first, let’s kill all of the accountants”.  Since then, finance and accounting professionals have fine-tuned their models, linked them to the balanced scorecard framework, enhanced allocations through activity-based costing, simplified ROI models, learned the quality paradigms and now deliver a mixed dashboard of financial and operations measures.

The financial decision-making paradigm remains at the core of modern business decision-making because it does a good job of organizing the key factors, determining the level of detail needed to make good decisions and communicating those decisions to others in a consistent fashion.  No paradigm is perfect, but the marginal cost-benefit approach is holding up well as it moves through its second century.

Production Strategy

Financial success often depends upon making wise strategic and structural decisions.  The Pareto Principle, or ABC rule, says that 20% of a firm’s products will deliver 80% of its volume or profit.  For most organizations, some version of the Pareto Principle will hold true in practice.  It may be 10% or 33% of the products accounting for most of the results, but this clustering is nearly universal.  Focusing on those activities that provide the greatest “bang for the buck” is a good strategic and tactical approach to business.
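
A sketch of how such an ABC classification might be computed from product volumes; the product names, volumes and the 80%/95% cumulative cut-offs are illustrative assumptions.

```python
def abc_classify(volumes, a_cut=0.80, b_cut=0.95):
    """Rank products by volume and bucket them by cumulative share of the total."""
    total = sum(volumes.values())
    ranked = sorted(volumes.items(), key=lambda kv: kv[1], reverse=True)
    labels, cumulative = {}, 0.0
    for name, volume in ranked:
        cumulative += volume / total
        if cumulative <= a_cut:
            labels[name] = "A"
        elif cumulative <= b_cut:
            labels[name] = "B"
        else:
            labels[name] = "C"
    return labels

volumes = {"P1": 500, "P2": 250, "P3": 120, "P4": 70, "P5": 40, "P6": 20}
print(abc_classify(volumes))
# {'P1': 'A', 'P2': 'A', 'P3': 'B', 'P4': 'B', 'P5': 'C', 'P6': 'C'}
```

Here two of six products carry 75% of the volume, the kind of clustering the rule predicts.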

Production methods (including services) can also be classified into ABC categories.  The oldest method, custom or handicraft production, can be labeled C.  The big breakthrough of standardized parts and mass production can be labeled A.  The hybrid products delivered by modular stages, as in an assembly line, can be labeled B.  Again, most organizations find themselves with a combination of mass (A), modular (B) and custom (C) produced goods.

Since mass production has inherent advantages and is the lowest cost approach, firms should add modular products when the incremental benefits outweigh the costs.  Moving to the custom level involves the same benefit/cost comparison.  The incremental percentage margin is set by the marketplace and tends to decline through time as competitors add similar products, better features and benefits are offered and processes are refined and costs removed.   Sales and product managers will usually overestimate the margin benefits, while finance and production managers will underestimate them.  On the marginal cost side, the roles will often be reversed. 
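
A minimal sketch of that comparison, with every number assumed purely for illustration: a first-year incremental margin that erodes as competitors respond, discounted and weighed against a one-time incremental cost.

```python
def incremental_value(margin, decay, cost, years=5, rate=0.10):
    """Present value of eroding incremental margins, minus the incremental cost."""
    pv = sum(margin * (1 - decay) ** t / (1 + rate) ** t for t in range(years))
    return pv - cost

# Assumed: $120k first-year margin eroding 15%/year vs. a $350k incremental cost
print(round(incremental_value(120_000, 0.15, 350_000)))  # ≈ 32533: positive, add the line
```

Sales optimism and finance pessimism can be bounded by running the same calculation with each side’s assumptions.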

The relative benefits and costs will vary from case to case, but the general structure and decisions will always need to be addressed.  In order to generate higher margins, firms need to offer products which appear to have greater custom appeal, and this requires additional costs.  Firms which neglect to evaluate these trade-offs, or which allow case-by-case negotiations, often find that they have too many custom products and too little profit — or too few value-added products and too few customers.

There are four strategic approaches to this inherent trade-off.  First, firms can be disciplined and choose just one of the three production types.  They can deliver goods in a narrow range (A), using focused-factory techniques.  As Henry Ford said, “any color you want as long as it’s black”.  They can adopt an operational excellence strategy and reduce costs through time.  Or, they can develop a modular strategy with well-defined processes for production, product development and marketing (B).  By leveraging the efficiencies of a set of highly effective modular processes, they can deliver new products and services at moderate volume with higher margins.  A product innovation strategy can be delivered this way.  Finally, they can choose a customized production strategy (C) and deliver the highest-margin niche products to specialized users.  This approach can attempt to leverage mass or modular production, but the real focus is on developing or adapting products to meet specialized needs.  This fits best with the customer intimacy strategy.

Unfortunately, the explosion of product choices in the 1970’s and 1980’s resulted in most firms delivering some messy, unintended combination of A, B and C products.  The mass production world moved from 90% A and a little B to 50% A, 40% B and 10% C in many cases.  Some firms even found one-third each as their production profile.  A second overall strategy has been to outsource the production of A-level mass production items to the lowest-cost source: in a focused factory, to a market leader, as an import, as a drop ship or through a partner.  A third strategy is to develop a truly modular production line à la Dell and move all production through a single highly refined process.  A fourth strategy is to outsource the customized work to partner firms, IT implementation shops, other engineering firms or to repackaging firms.

It is possible to combine mass, modular and custom product delivery flows within a single firm, but it is not easy.  At a minimum, firms need to make decisions in these terms, monitor the results and adapt to ensure that the marginal benefits justify the marginal costs.

Strategic Planning: Balanced and Disciplined

Of the many planning methods proposed and widely used in the last two decades, two stand out for their impact and longevity.  Michael Treacy and Fred Wiersema’s “Discipline of Market Leaders” was published in 1994, followed two years later by Robert Kaplan and David Norton’s “Balanced Scorecard”.  How do the two interact ideally?  Can a strategy process and strategy be both balanced and disciplined?

The discipline of market leaders is to prioritize resource investments in one dimension of strategic choice, while making modest investments in the other dimensions.  Treacy and Wiersema define the generic dimensions as Operational Excellence (cost reduction), Product Leadership and Customer Intimacy (best total solution).  Based upon market opportunities (customers and competitors), wise organizations choose one dimension for emphasis and align all other variables to support that choice.

 The balanced scorecard emphasizes the importance of measures and a complementary planning process that ensures that four levels of activity are reviewed:  Learning and Growth (asset management, broadly speaking), Internal Processes (operations, product development, customer interface – the how), Customer Satisfaction and Financial Results.  Asset management feeds optimal processes delivering customer satisfaction and financial results. 

The two approaches seem to conflict: one says focus (discipline) while the other says diversify (balance).  The resolution lies in their application.  The balanced scorecard provides a universal framework of the factors that drive business success in a logical sequence.  Organizations still have to compare their direction (mission, vision, values) with their situation (SWOT) in order to determine critical success factors.  CSFs help the organization select the 10-20 measures that best cover the landscape.

 The discipline of market leaders is making strategic investment choices, while the balanced scorecard is using a planning and control process that highlights opportunities and links strategy to results.  The advice from Treacy and Wiersema is to focus on a single dimension, rather than to spread the investments evenly.  In balanced scorecard terms, this means that the measures will emphasize different dimensions.

 Focusing on operational excellence indicates the use of more measures in the Internal Processes and Asset Management levels.  Customer intimacy requires customer satisfaction measures, key internal process measures that impact customers and a touch of asset measures regarding the adequacy of the products offered.  Product leadership requires measures of customer satisfaction with the features and benefits set offered, the product development process itself and the availability of key technical resources that create products.

 Organizations will benefit from finding ways to apply the insights from both camps.  Strategy and structure matter more than ever.  The best answers continue to be “both/and” rather than “either/or”.

Treacy and Wiersema, The Discipline of Market Leaders: http://www.amazon.com/Discipline-Market-Leaders-Customers-Dominate/dp/0201407191/ref=sr_1_1?ie=UTF8&s=books&qid=1262473135&sr=1-1

Kaplan and Norton, The Balanced Scorecard: Translating Strategy into Action: http://www.amazon.com/Balanced-Scorecard-Translating-Strategy-Action/dp/0875846513/ref=sr_1_1?ie=UTF8&s=books&qid=1262554699&sr=1-1

Balanced Scorecard for Strategic Planning and Measurement (SlideShare): http://www.slideshare.net/kennyong/balanced-scorecard-for-strategic-planning-and-measurement