by Joseph Blakeman
Center for Urban Transportation Studies
University of Wisconsin-Milwaukee
History of Benchmarking
Gov't vs Private Roles in Benchmarking
Do's and Don'ts
A Case Study
What is Benchmarking?
The term benchmarking was originally used by early land surveyors to identify a fixed point from which all other measurements are made. In the late 1970s, however, it took on a broader meaning. Applied to an organization, benchmarking is a process of determining who else does a particular activity best and emulating what they do to improve performance. A more formal definition is "simply the systematic process of searching for best practices, innovative ideas and highly effective operating procedures that lead to superior performance (1)."
Businesses such as AT&T, Motorola, and Xerox, along with most major corporations and many smaller ones, have embraced benchmarking as standard operating procedure since the mid- to late 1980s. It has particular significance in technology, where the rapid change of the business climate can leave a company out in the cold. Governmental and non-profit organizations, however, did not begin implementing benchmarking until the early 1990s.
The issue of government benchmarking was among many raised in Vice President Gore's National Partnership for Reinventing Government (NPR) report. The report states that "federal agencies have been reinventing their operations to become more businesslike, many have been benchmarking against world-class private sector companies, other organizations, and other federal agencies that have become really good at what they do (2)." This led to the Federal Benchmarking Consortium Study Report in February 1997. Benchmarking is now being used by agencies such as the EPA and NASA, as well as the cities of Reno and Salt Lake City and various other federal, state, and local government agencies, to improve their procedures and practices.
Benchmarking is similar in some ways to other business improvement practices and different in others. These practices include total quality management (TQM), reengineering, and performance measurement.
Benchmarking vs. TQM
Total Quality Management, or TQM for short, consists of three main points (3). First, collaboration with suppliers to ensure that the supplies utilized in work processes are well designed and fit for use. Second, continuous employee analysis of work processes to improve their functioning and reduce process variation. Third, close communication with customers to identify and understand what they want and how they define quality.
TQM works through one of two processes: consultant-oriented TQM or project-oriented TQM. Consultant-oriented TQM typically involves the creation of separate quality control bodies that oversee the implementation of improvements and the control of quality improvement procedures. This approach is generally problematic in the public sector because the TQM bodies exist outside the chain of command, confusing accountability, and they often fail to become part of the hierarchical structure of government organizations. Project-oriented TQM addresses some of these shortcomings by including all employees in the process, considering their needs as well as the customer's, and using established procedures as a foundation instead of implementing new ones.
In general, TQM uses internal methods and the ideas of people within an organization to improve itself from the inside out. It does not include comparing one's organization to another, which is critical in benchmarking. However, because employees may be unwilling to accept ideas without understanding their logic, both TQM and benchmarking require the input of everyone in an organization, and a general resistance to change must be overcome.
Benchmarking vs. Reengineering
Another method of performance review and improvement is reengineering, which has been defined as "the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in contemporary measures of performance, such as cost, quality, service, and speed (4)." This generally involves replacing old practices with completely new ones, usually developed by a team and a consultant who must devise, measure, and win acceptance for the new ideas.
Reengineering can be problematic in government because governments do not have profits, and completely discarding old processes and breaking down barriers between departments runs into political, trade union, or other pressures. This sometimes results in the creation of new agencies rather than the overhaul of old ones. Reengineering is also very expensive and fails in over fifty percent of cases, and it requires TQM after its successful implementation.
While reengineering is cutting-edge and dramatic, and encourages employees to think big, it is still an internal process. It does not involve comparing the practices of one organization to those of another. While benchmarking may result in the use of completely new ideas, as reengineering does, it often simply improves existing ones. In addition, after performing reengineering, organizations often turn to TQM to maintain their success.
Benchmarking vs. Performance Measurement
“Performance Measurement is government's way of determining whether it is providing a quality product at a reasonable cost (5).” In fact, more than half of all U.S. cities collect performance measures of some type (6). Performance measurement is also used in both government and the private sector for reporting to management.
Performance measurement can be used to measure such things as productivity, effectiveness, quality, and timeliness. When performance measures are used extensively and consistently, they can be quite effective at improving an organization's output. Government agencies use performance measurement for the following reasons (7):
• Better decision-making: it provides managers with information to perform their management control functions;
• Performance appraisal: it links both individual and organizational performance to aspects of personnel management and motivates public employees;
• Accountability: it fosters responsibility on the part of managers;
• Service delivery: it improves public service performance;
• Public participation: clear reporting of performance measures can stimulate greater public interest in government services and more encouragement for government employees to provide quality services; and
• Improvement of civic discourse: it helps to make public deliberations about service delivery more factual and specific.
Benchmarking takes the data collected as performance measures and compares it with that of other organizations performing the same duties or processes. Through this comparison, performance measurement becomes something more than "bean counting." Because performance measurement is a prerequisite to benchmarking, the two have become intertwined, but they are not the same.
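As a rough illustration of how performance measures feed into benchmarking, the sketch below compares two hypothetical measures against a best-in-class partner and expresses each gap as a fraction of the benchmark's value. All names and figures are invented for the example:

```python
# Illustrative sketch only: two hypothetical performance measures for a
# city permit office, compared against a best-in-class benchmark partner.
# All names and figures are invented for the example.

own = {"days_to_issue_permit": 18.0, "cost_per_permit": 240.0}
best_in_class = {"days_to_issue_permit": 6.0, "cost_per_permit": 150.0}

def performance_gap(own_value: float, benchmark_value: float) -> float:
    """Gap as a fraction of the benchmark: 0.0 means parity; 2.0 means
    our cost or time is three times the benchmark's."""
    return (own_value - benchmark_value) / benchmark_value

for measure in own:
    gap = performance_gap(own[measure], best_in_class[measure])
    print(f"{measure}: gap = {gap:.0%}")
```

The point of the ratio is that a raw measure ("18 days to issue a permit") says little on its own; only the comparison turns it into a target for improvement.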
TQM: Develop dialogue within a process to improve it through gradual increments
Performance measurement: Take measurements for comparison and improvement
Reengineering: Develop completely new methods for obsolete or failing processes
Benchmarking: Compare processes with others who do the same and determine best methods
These processes can be thought of in the following situation. An organization seeking improvement first takes performance measures and determines which processes need to be improved. Then TQM can be employed to improve those processes internally. In addition, the organization may look beyond itself to other organizations for insight, and benchmark. If TQM and benchmarking together are not enough of an improvement, the organization may turn to reengineering and restructure the whole process. In any case, performance measurement and TQM will need to be employed to ensure that the processes developed remain at the proper levels. Finally, the whole cycle will need to be repeated as new improvements are needed.
G.H. Watson outlines the development of benchmarking in five phases (8):
Phase 1 1950-1975 Reverse Engineering
Phase 2 1976-1986 Competitive Benchmarking
Phase 3 1982-1988 Process Benchmarking
Phase 4 1988+ Strategic Benchmarking
Phase 5 1993+ Global Benchmarking
Reverse engineering meant tearing things apart, examining them, improving them, and putting them back together. Benchmarking in its modern form really began with competitive benchmarking at Rank Xerox, starting around 1976. This was followed by process benchmarking, which looked for ideas outside the direct competition. Strategic benchmarking involves fundamentally changing the business, not just the process (9). Global benchmarking is the newest phase and involves comparing your organization on a global scale.
The Xerox Case
In the 1970s, Xerox was the largest manufacturer of copiers in the world. However, Japanese manufacturers were making better copiers, selling them for less, and making a good profit. This prompted the company to compare itself directly with its best competitors to determine what it could do to increase productivity while decreasing costs.
The results from their benchmarking were astonishing. They found (10):
• Xerox's ratio of indirect to direct staff was twice that of its competitors;
• It had nine times the number of production suppliers;
• Assembly line rejects were in the order of ten times worse;
• Product time to market was twice as long;
• Defects per 100 machines were seven times worse.
However, Xerox's Japanese joint venture, Fuji Xerox, was performing well. The problem was large, and forced some changes.
Over the next five years, Xerox would have to increase productivity 18% to keep up with its competitors. It did this through a strategy known as Leadership Through Quality, which became the foundation of the company's revival. For example, Xerox benchmarked L.L. Bean, a Maine outdoor sporting goods retailer, for its excellent warehouse procedures, which are now the standard at most companies. Xerox had benchmarked almost 230 performance areas by the time it won the Malcolm Baldrige National Quality Award in 1989 (11).
Public Sector Cases
Due to the relative infancy of benchmarking in the public sector, the results of many cases are still not fully known. The demand for better services for less has many taxpayers wanting a government that acts like a business and treats them as paying customers. In an age when everything is available at the click of a mouse or a swipe of a card, no one wants a government full of red tape and long waits.
A couple of early federal examples are the Bureau of the Census and the IRS. The Bureau of the Census set up four teams, each assigned a specific task. One team withdrew due to a lack of support from team members. Another withdrew because it could not find sponsors. A third took a very informal approach that proved of little use. Only one team finished, and even it found it difficult to find a meeting room and get all its members together at the same time.
The IRS, however, succeeded in benchmarking its information system. It hired outside consultants, who started by speaking to top IRS executives. The executives then showed managers examples of benchmarking, and the managers decided what to benchmark. This was followed by a literature review and outside contacts. Finally, using a method similar to Xerox's, they benchmarked four areas (12): software measurement, picking and packing in form distribution centers, personnel recruitment and retention, and assistance at walk-in taxpayer sites. The Ogden, Utah site, a best-in-class performer, was emulated for its recognized service record. The effort went so well that the IRS now requires a benchmarking study as part of its standard methodology (13).
NASA also benchmarked successfully in the early 1990s, conducting 47 separate benchmarking studies. These have been so successful that other federal agencies have turned to NASA for help with benchmarking.
Government benchmarking has also reached the state and municipal levels. States such as Maryland and Oregon have benchmarked various agencies successfully, as have municipalities including Reno, Boston, Salt Lake City, and Indianapolis. Reno, frustrated by traffic accidents, benchmarked survey techniques from Harrah's Casino Hotels. After surveying residents and noting complaints, Reno police were able to write fewer tickets while lowering the number of accidents by twenty percent (14). As a result, the department's approval rating soared from forty percent to ninety percent.
Growth and Demand
As mentioned earlier, benchmarking is a standard tool for most private sector companies. In government, however, its use is growing very rapidly. Many agencies and organizations nationwide and worldwide are beginning to look at benchmarking as a tool to help them achieve better results for less. This can be much easier in government in some cases because sharing information is "the cheapest and most efficient, effective, and compelling means for improving performance (15)."
Benchmarking has many advantages, which will be discussed in this section. Rank Xerox's experience with benchmarking led it to the following benefits (16):
• It brings out new and innovative ways of managing.
• It is an effective team building tool.
• It has increased general awareness of costs and performance of products and services in relation to those of competitor organizations.
• It brings together all the divisions and helps to develop a common front for facing competition.
• It highlights the importance of employee involvement and, as such, encourages recognition of individual/team efforts.
These illustrate the benefits of competitive benchmarking, which is used in both the business sector and the public sector. Some of the advantages that will be discussed here are team building, comprehensibility, flexibility, creativity, and evolution.
Benchmarking cannot be successful without the full involvement of everyone in contact with a project. It creates a united front for an organization and gives those who work within it a common goal to accomplish. It also includes the ideas and concerns of those affected.
Along with good work on such a project comes recognition. As discussed later, there are several awards for an organization to receive, and within an organization there may be yet more awards for individuals, teams, or agencies with exemplary performance. This is achieved by setting goals and then meeting or exceeding them.
Unlike some methods, benchmarking is easy to understand. This is due largely to the fact that benchmarking produces a direct comparison to another organization. After determining whom to follow, you study what they do, and emulate it. There is no misunderstanding of the overall goal of being the best.
Benchmarking is flexible and can be interdisciplinary. It can be used by almost any organization, public, private, or non-profit, and can be fitted to a large multinational corporation or a local shop, from a federal agency to the government of a small village.
Identifying the best does not necessarily mean that a competitor has the best solution. It may be a company that just does something well. When Rank Xerox needed to improve its shipping, it looked to L.L. Bean. This sort of out-of-the-box thinking can create new standards rather than merely emulating someone else's practices.
Sometimes an organization knows what its goals are, but the path to meeting them is not clear. Furthermore, even if another organization is perceived to be doing something best, that does not mean it couldn't be done better. After clearly defining goals, it can be easier to come up with new, innovative ways of reaching them. Benchmarking can also create new ways of obtaining information or forming partnerships, such as Remington, a shotgun shell manufacturer, getting information on making shinier shells from Maybelline's lipstick containers.
Benchmarking evolves with the consumer and doesn't require a large up-front cost. As things change in the world, so does who is the best. Because benchmarking involves constant reiteration, evaluation, and change, it changes as the market or consumer does. Although benchmarking is constantly changing, it does not carry a big up-front price tag. All one needs to get started are office supplies and a list of the best performers.
Benchmarking can require a large investment in time, labor, and capital. Costs for a large project can easily reach into the hundreds of thousands of dollars. These can be minimized through careful, thoughtful, and deliberate planning. As Robert Graham of Medrad notes, “Typically, there are expenses related to travel as well as indirect costs associated with employee time devoted to trips and team meetings. With careful planning benchmarking costs can be kept to a minimum (17).”
The size and scope of a benchmarking project relate directly to its cost. An easy way to minimize costs is to take a stepwise approach, which limits the amount of investment and risk taken on at any one time.
Organizations can pool resources by taking joint benchmarking projects and dividing costs accordingly. This is more easily done in organizations that are not directly competing, such as government agencies. Various organizations have pooled their resources and knowledge into benchmarking groups.
Many consulting firms will also aid an organization in a benchmarking project. These firms have the technical knowledge and experience to gather and interpret data more efficiently. Careful background research on a consultant must be done to make the process effective, and it comes at a price. However, it does not require hiring additional staff or expanding the roles of current staff.
Education and Travel
Benchmarking does require education and travel costs. Once a team is chosen, it often needs to be educated in the methods of benchmarking through workshops, seminars, meetings, and courses. This information must then be disseminated to others. When researching organizations, it is sometimes best to see the organization in action and meet with the team that performed and implemented the changes, to gain first-hand knowledge of the processes involved.
One of the most important methods of keeping benchmarking costs low is effective communication. This involves knowing what you need and where your own deficiencies are, and sharing information about yourself. Informing others inside your organization of what has been learned (through reports, analyses, etc.) and how it will be implemented (flowcharts, matrices, schematics, etc.) is also critical. Clear communication also keeps management informed of the project's status, reducing confusion and conflict between management and the team, and among team members themselves.
As mentioned earlier, benchmarking is flexible enough for almost any application. How to go about benchmarking varies as much as organizations themselves and their ideologies do; processes vary widely by goals, philosophies, industry, culture, management plan, and organizational structure. This section explains some of the explicit processes developed by companies that have benchmarked. The best choice for a first try at benchmarking is the most general process, the Motorola Five-Step Process described later. An experienced benchmarker such as Rank Xerox uses a much more detailed process.
Rank Xerox Process
Rank Xerox revolutionized business thinking with its benchmarking plan. It had a clear goal and determined upper management team. A five-phase, twelve-step process was developed by Robert C. Camp, Manager of Benchmarking Competency Quality and Customer Satisfaction at Xerox (18):
Xerox Twelve-Step Process
Phase 1: Planning
1. Identify what to benchmark
2. Identify comparative companies
3. Determine data collection method and collect data
Phase 2: Analysis
4. Determine current performance gap
5. Project future performance levels
Phase 3: Integration
6. Communicate findings and gain acceptance
7. Establish functional goals
Phase 4: Action
8. Develop action plans
9. Implement specific actions and monitor progress
10. Recalibrate benchmarks
Phase 5: Maturity
11. Attain leadership position
12. Fully integrate practices into processes
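Steps 4 and 5 of the process above (determine the current performance gap, project future performance levels) can be sketched numerically. This is only an illustrative sketch, not Xerox's actual method; it assumes, hypothetically, that both the organization and the benchmark improve at constant annual rates, and all numbers are invented:

```python
# A hedged sketch of steps 4-5 (determine the performance gap, project
# future performance levels), not Xerox's actual method. It assumes both
# parties improve at constant annual rates; all numbers are invented.

def years_to_close_gap(own: float, benchmark: float,
                       own_rate: float, benchmark_rate: float) -> int:
    """Years until our metric (lower is better, e.g. defects per 100
    machines) falls to or below the benchmark's projected level."""
    years = 0
    while own > benchmark:
        own *= 1 - own_rate              # our annual improvement
        benchmark *= 1 - benchmark_rate  # the benchmark keeps improving too
        years += 1
        if years > 100:                  # guard against an unclosable gap
            raise ValueError("gap never closes at these rates")
    return years

# e.g. 70 defects per 100 machines vs. 10, improving 30%/yr vs. 5%/yr
print(years_to_close_gap(70.0, 10.0, 0.30, 0.05))  # -> 7 years
```

Projecting the benchmark forward as well as yourself is the point of step 5: a target set against today's best-in-class level will already be stale by the time it is reached.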
AT&T and Other Processes
Two-time Baldrige Award winner AT&T, an active benchmarker, has developed a nine-step model (19):
AT&T Nine-Step Process
1. Identify what to benchmark
2. Develop a benchmarking plan
3. Choose data collection method
4. Collect data
5. Choose best-in-class companies
6. Collect data during a site visit
7. Compare processes, identify gaps, and develop recommendations
8. Implement recommendations
9. Recalibrate benchmarks
Other processes have been developed such as the Motorola five-step process and the seven-step process (20).
Motorola Five-Step Process
1. Decide what to benchmark
2. Find companies to benchmark
3. Gather data
4. Analyze data and integrate results into action plans
5. Recalibrate and recycle the process
Seven-Step Process
1. Determine which function(s) to benchmark
2. Identify key performance variables to measure
3. Identify best-in-class companies
4. Measure performance of best-in-class companies
5. Measure your own performance
6. Specify programs and actions to meet and surpass best-in-class performance
7. Implement and monitor results
SPI Five Phase Model
The Strategic Planning Institute's (SPI) Council on Benchmarking produced the Simple Consensus Model, which summarizes the five phases of benchmarking in generic terms (21); the third phase, for example, is "Reach Out." These generic phases can then be "mapped" onto the processes described above, such as the Motorola Five-Step Process (22).
Readiness means determining whether an organization is capable of starting and sustaining a benchmarking process. There are five broad categories for assessing an organization's readiness for best practices (23):
1. Benchmarking readiness deals with matching the benchmarking organization and its benchmark partners on various dimensions.
2. Culture readiness concerns the readiness of the benchmarking organization and its environment for importing best practice.
3. Implementation readiness covers activities that prepare the specific organizational entity and the benchmark practice itself for implementation in the new setting.
4. Operation readiness addresses the last and most enduring issues: those that monitor the status and ensure the successful ongoing operation of the practice once it is in place.
5. Technical readiness centers on the technical skills needed to conduct a benchmarking study and to import a best practice.
Various techniques are used to determine whether an organization is ready. This can be accomplished by asking questions or by using a scoring system.
Benchmarking in government is inherently different from benchmarking in the private sector. This stems from differences in goals and in how government relates to labor and the media.
Quality and Profit
Both the government and the private sector strive to provide quality services at lower costs, but the goal of government is not profit; it is to reach some level of utility or benefit. This is both a plus and a minus. Since there is no competition, information and ideas are rarely held secret and are usually shared freely. On the other hand, there is no defined measure of success such as a stock price.
Labor, Media and Other Issues
As in all organizations, there is resistance to change and a general lack of pressure for improvement, whether from organized labor, politicians, or other employees. In benchmarking, this can be countered by starting at the top and involving everyone from the bottom up, creating a team atmosphere dedicated to getting the job done.
The public sector is under constant scrutiny from the media, politicians, and citizens. Many feel that benchmarking is an expensive waste of money, resources, labor, and time or other matters are more pressing. In addition, failures can be very public and result in harsh criticism. Therefore, careful planning should be implemented and public input sought on all benchmarking projects to prevent confusion and waste at taxpayer expense. Also, benchmarking, if done properly, will save resources that can then be used for other matters without increasing taxes or fees.
A not-invented-here mentality is also hard to overcome. Many agencies have difficulty accepting new ideas that have been implemented elsewhere. There is a suspicion that what others do is not necessarily the same and would therefore be ineffective or fail. A thorough understanding of the case involved, and communication with the organization and individuals responsible, is always important to understanding the process being studied.
The following is an abbreviated list of do’s and don’ts from Bogan and English (24):
• As a general rule, the process or function selected should be one of the most critical to your business strategy.
• Projects should be well defined, and generally they should require less than a year to complete the research, analysis, action planning, and preliminary implementation.
• Management must support the project, provide adequate resources, and be prepared to champion implementation of the best practice findings.
• The organization must be willing to change.
• The team must understand who the customer is for the study. The customer’s expectations from the study are established by direct interaction.
• It is very useful to have an explicit mission statement which documents the project’s deliverables, purpose and metrics.
• Identify all that may be affected by the project and secure their ideas, contributions, and support.
• Consider implementation issues early in the planning phase and throughout the project.
• Don’t initially benchmark areas where the organization already performs
• Don’t benchmark topics or processes that aren’t important.
• Don’t benchmark processes that are so broad in scope, so poorly defined, or so poorly circumscribed, that the team cannot agree on its mission and cannot focus its efforts.
• Don’t undertake benchmarking projects with a team that is too large to be effective (10 or more) or too small to be credible (1 to 2).
• Don’t undertake complex process benchmarking efforts with team members that don’t understand the benchmarking process and don’t have access to and experienced benchmarking facilitator.
• Don’t benchmark unless all those affected by likely changes are represented on the benchmarking team or are given opportunity to contribute their ideas and interests to the benchmarking process.
Recognition for benchmarking is not limited to success and peer acknowledgment; several awards are given to organizations. These include the Malcolm Baldrige National Quality Award (MBNQA), which opened to public organizations in 1997; the European Quality Award (EQA); the Deming Prize; and the Carl Bertelsmann Prize, which is awarded to innovative municipalities. These awards focus on how an organization plans and executes its management based on quality, planning, and improvement.
Malcolm Baldrige National Quality Award
The Malcolm Baldrige National Quality Award (MBNQA) was established by Congress in 1987 to recognize U.S. companies, and later government agencies, for outstanding business practices. These practices are judged in seven categories: Leadership; Information and Analysis; Strategic Quality Planning; Human Resource Development and Management; Management of Process Quality; Quality and Operational Results; and Customer Focus and Satisfaction. Benchmarking was not included in 1988 and was added in 1989 at 80 points, but by 1993 it accounted for 400 of a possible 1,000 points.
Mass Transit Railway Corporation (MTRC) in Hong Kong
MTRC carries millions of passengers daily and is one of the largest urban metros in the world, yet it still benchmarks. This is because "Top Management is committed to a policy of continuous improvement (25)."
Since 1993, MTRC has focused on three objectives (26):
• To develop a system that facilitates continuous improvement through benchmarking.
• To identify areas of excellence and make improvements to reach the level of best practice.
• To build a system that can be used in public to demonstrate the value of its services to passengers.
MTRC then began to benchmark its key processes. The key steps are (26):
• Identify the critical success areas of the business.
• Define key performance indicators of each critical success area.
• Submit data to the administrator.
• Consolidate benchmarking results.
• Identify the gaps of each performance indicator with the best performer.
• Conduct process benchmarking for high-priority improvement areas.
This process benchmarking is done through Community of Metros (CoMET), which includes mass transit systems from Mexico City, New York City, Paris, London, Moscow, Sao Paulo, Berlin, and Hong Kong. Each year all members gather uniform performance data to compare in semiannual meetings. Five key areas of interest are service quality, reliability, efficiency, asset utilization, and financial performance. These areas of interest led to the development of eighteen Key Performance Indicators (KPIs) from five categories including (27):
Categories Key Performance Indicators
Financial Performance 1) Total cost/passenger
2) Operations cost/passenger
3) Maintenance cost/revenue car operating km
4) Fare revenue/passenger
5) Total commercial revenue/operations cost
6) Operations cost/revenue car operating km
7) Total cost/revenue car operating km
7) Total cost/revenue car operating km
8) Passenger journeys/total staff + contractor hours
9) Revenue capacity km/total staff + contractor hours
10) Revenue car km/total staff hours
11) Passenger km/capacity km
12) Capacity km/track km
13) Revenue car operating hours between incidents
14) Car operating hours/total hours delay
15) Trains on time/total trains
16) Revenue operating car km/total incidents
17) Total passenger hours delay/1000 passenger journeys
18) Passenger journeys on time/total passenger journeys
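KPIs of this kind are simple ratios of the collected performance measures. The sketch below computes two of them (KPI 1 and KPI 15) from invented annual figures; the data structure and every number are hypothetical, not MTRC's:

```python
# Illustrative only: CoMET-style KPIs are simple ratios of collected
# performance measures. The annual figures below are invented, not MTRC's.

annual = {
    "total_cost": 900_000_000.0,        # total cost for the year
    "passenger_journeys": 800_000_000,  # passenger journeys for the year
    "trains_on_time": 1_990_000,
    "total_trains": 2_000_000,
}

def kpi_total_cost_per_passenger(d: dict) -> float:
    """KPI 1: total cost / passenger."""
    return d["total_cost"] / d["passenger_journeys"]

def kpi_trains_on_time(d: dict) -> float:
    """KPI 15: trains on time / total trains."""
    return d["trains_on_time"] / d["total_trains"]

print(f"KPI 1:  {kpi_total_cost_per_passenger(annual):.3f} per passenger")
print(f"KPI 15: {kpi_trains_on_time(annual):.4f}")
```

Because each KPI is normalized by size (per passenger, per car-km, per train), metros as different as Hong Kong and Berlin can be compared on the same scale.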
MTRC ranked at the top of half of the KPIs. Cliff Kong, organization and methods manager, says CoMET highlights the strengths and weaknesses of MTRC in various areas so the company can focus its improvement efforts (28).
To continue its success, MTRC set up task forces for potential improvement areas. It also uses case studies and site visits to other metros, which let it compare itself directly with its peers. There are some drawbacks, however: major changes are not achievable in the short term due to regulations and safety procedures.
MTRC has conducted several process benchmarking studies in the past, covering its closed-loop customer satisfaction process, supplier management/purchasing process, information technology and system function, asset management process, a safety case study, and a reliability case study (29). It did so with companies such as IBM, Xerox, American Express, Federal Express, Hong Kong Telecom, Chase Manhattan Bank, and Orient Overseas Container Line.
MTRC netted its biggest gains by benchmarking its suppliers. It implemented eight changes in the supplier purchasing process and was able to reduce its material supplier base by 40 percent (30). It also saved $16.5 million through alternative sourcing and $6 million by identifying and adopting a noise-damping wheel for its electric multiple units (31).
According to Andrew McCusker, operations engineering design manager, “Benchmarking is not easy when you attempt to compare full-service delivery and organizational performance. However, CoMET has been very successful in doing just that. The set of measures can be taken on board by each metro and used as an indicator of business proficiency over the long term. It is dynamic and helps to keep conservative organizations like railways moving (32).” He also adds, “By maintaining long-term relationships with benchmarking partners and recalibrating the measures, it will generate the opportunity for MTRC to win in competitive battles (33).”
This information is disseminated under the sponsorship of the United States Department of Transportation, Federal Transit Administration, in the interest of information exchange. The United States Government assumes no liability for the contents or use thereof. The United States Government does not endorse products or manufacturers. Trade or manufacturers' names appear herein solely because they are considered essential to the contents of these reports.
1. Hammer, M., and Stanton, S. The Reengineering Revolution: A Handbook. New York: HarperCollins, 1995, pp. 11.
2. United States Government. National Partnership for Reinventing Government (NPR). Washington: 2000.
3. Cohen, S., and Eimicke, W. Tools for Innovators. San Francisco: Jossey-Bass, 1998, pp. 53.
4. Hammer, M. and Champy, J. Reengineering the Corporation: A Manifesto for Business Revolution. New York: HarperCollins, 1994, pp. 31.
5. National Center for Public Productivity, Rutgers University at Newark. A Brief Guide for Performance Measurement in Local Government. http://newark.rutgers.edu/~ncpp/cdgp/Manual.htm, 1997.
6. Ammons, D. Municipal Benchmarks: Assessing Local Performance and Establishing Community Standards, 2nd ed. Thousand Oaks, CA: Sage Publications, 2001, pp. 1-2.
7. National Center for Public Productivity, Rutgers University at Newark. A Brief Guide for Performance Measurement in Local Government. http://newark.rutgers.edu/~ncpp/cdgp/Manual.htm, 1997.
8. Bullivant, J.R.N. Benchmarking for Continuous Improvement in the Government Sector. Essex: Longman Group Limited, 1994, pp. 9.
9. Bullivant, J.R.N. Benchmarking for Continuous Improvement in the Government Sector. Essex: Longman Group Limited, 1994, pp. 10.
10. Zairi, M., and Leonard, P. Practical Benchmarking: A Complete Guide. London: Chapman & Hall, 1994, pp. 24.
11. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 26.
12. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 234.
13. Resch, T. and Selman, J.R. Benchmarking in the Federal Government: A Survey. U.S. Dept of Energy Office of Environmental Management. http://www.em.doe.gov/bch/survrpt.html, pp. 6.
14. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 237.
15. Keehley, P., Medlin, S., MacBride, S., and Longmire, L. Benchmarking for Best Practices in the Public Sector. San Francisco: Jossey-Bass, 1997, pp. 207.
16. Zairi, M. Effective Management of Benchmarking Projects. Oxford: Butterworth-Heinemann, 1998.
17. Feltus, A. Exploding the Myths of Benchmarking. http://www.apqc.org/free/articles/dispArticle.cfm?ProductID=646, 1994.
18. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 82.
19. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 83.
20. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 82.
21. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 84.
22. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 85.
23. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 85.
24. Bogan, C.E. and English, M.J. Benchmarking for Best Practices. New York: McGraw-Hill, 1994, pp. 286-94.
25. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 1.
26. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 1.
27. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 1.
28. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 4.
29. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 5.
30. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 7.
31. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 7.
32. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 8.
33. Powers, V.J. “Benchmarking in Hong Kong: Mass Transit Railway Excels in Worldwide Industry Study.” Benchmarking in Practice, issue 11. Houston: American Productivity and Quality Center, 1998, pp. 9.