This title is the sister book to the global best-seller Metrics for IT Service Management. Building on the basic steps described there, this new title sets metrics in the context of the ITIL 2011 Lifecycle approach. More than that, it looks at the overall goal of metrics, which is to achieve Value. The overall delivery of Business Value is driven by Corporate Strategy and Governance, from which Requirements are developed and Risks identified. These Requirements drive the design of Services, Processes and Metrics. Metrics are designed, and metrics enable design, as well as governing the delivery of value through the whole lifecycle. The book shows the reader how to achieve this Value objective by extending the ITIL Service Lifecycle approach to meet business requirements.
Page count: 247
Publication year: 2020
Metrics for Service Management
Designing for ITIL®
Title:
Metrics for Service Management: Designing for ITIL®
Author:
Peter Brooks
Editor:
Jane Chittenden
Publisher:
Van Haren Publishing, Zaltbommel, www.vanharen.net
ISBN hard copy:
978 90 8753 648 0
ISBN eBook:
978 90 8753 649 7
Print:
First edition, first impression, March 2012
Design and Layout:
CO2 Premedia bv, Amersfoort - NL
Copyright:
© Van Haren Publishing 2012
ITIL® is a Registered Trade Mark of the Cabinet Office in the United Kingdom and other countries.
For any further enquiries about Van Haren Publishing, please send an e-mail to: [email protected]
Although this publication has been composed with the utmost care, neither Author nor Editor nor Publisher can accept any liability for damage caused by possible errors and/or incompleteness in this publication.
No part of this publication may be reproduced in any form by print, photo print, microfilm or any other means without written permission by the Publisher.
The Author of this title, Peter Brooks, wrote his first book for Van Haren Publishing in 2006. Titled ‘Metrics for IT Service Management’ and supported by many reviewers, it became a global best-seller, referenced in articles, conferences and operations across the world. It is as valid and popular today as it was then. The Publisher is extremely fortunate that Peter asked us to publish his follow-up piece based on the use of Metrics within the ITIL® V3 Lifecycle approach and their wider use within the business. The two titles complement each other extremely well and can be used together. The Publisher would like to thank Peter for his expertise, dedication, courtesy, good humor and finally for his friendship.
As well as thanking the reviewers for their invaluable service in improving the quality of the book, the Author would like to thank Annelise Savill for her excellent support and good humour throughout this project. He would also like to thank his wife, Verity, for her patience and help, particularly with the design of the main graphic in this publication.
Many colleagues and contributors helped to review and validate the content of this title. Our reviewers kindly spent many hours checking the facts and interpretation, helping to refine these works and improve their quality. Special thanks go out to the following, who kindly spent valuable time checking this particular material:
Claire Agutter
IT Training Zone
Rob Benyon
Rhodes University, South Africa
Bart Van Brabant
Independent Consultant
Jacques A. Cazemier
Verdonck, Klooster & Associates,
Stéphane Cortina
Centre de Recherche Public Henri TUDOR
Suzanne Galletly
EXIN
Craig Hyland
TD Bank
Richard de Kock
Digiterra (Pty) Ltd
David Hinley
Independent Consultant
Alex Levinson
Tolkin NL
Michael Imhoff Nielsen
IBM Denmark Aps
HP Suen
Director PRISM and Director International Affairs Hong Kong
1 Introduction
1.1 Background knowledge
1.2 How to use this book
2 Managing, metrics and perspective
2.1 Managing
2.2 Perspective
2.3 Full metric description
2.4 Goals, Critical Success Factors (CSFs) and Key Performance Indicators (KPIs)
Section Break: Governance
3 Governance
3.1 Perspective
3.2 Metrics
3.3 Processes
Section Break: Service Strategy
4 Service Strategy
4.1 Perspective
4.2 Critical Success Factors
4.3 Metrics
4.4 Process metrics
Section Break: Service Design
5 Service Design
5.1 Perspective
5.2 Business Analysis or Requirements Engineering
5.3 Critical Success Factors - designing services
5.4 Metrics
5.5 Process metrics
6 Classifications of metrics
6.1 ITIL® metric structure
6.2 Six Sigma process metrics
6.3 COBIT capability, performance and control
6.4 Capability Maturity Model (CMMI)
6.5 Software process improvement and capability determination - SPICE ISO/IEC 15504
6.6 Goal, Question, Metrics (GQM)
6.7 Tudor’s IT Process Assessment (TIPA) framework
7 Outsourcing and emerging technologies
7.1 Outsourcing
7.2 Outsourcing case study
7.3 Virtualization, clouds, data centers, and green computing
7.4 Service Orientated Architecture (SOA)
8 Cultural and technical considerations
8.1 Organizational culture
8.2 Replacing metrics – messy reality vs. beautiful statistics
9 Tools and tool selection
9.1 Checklists
9.2 Measuring communications and meetings – Document Management System
9.3 Meeting management
9.4 Measuring project milestones and process activities
9.5 Surveys
Section Break: Service Transition
10 Service Transition
10.1 Perspective
10.2 Critical Success Factors
10.3 Metrics
10.4 Process metrics
11 Service Transition and the Management of Change
11.1 Staff development, satisfaction and morale
11.2 Employee development and training
11.3 Role definition SFIA
11.4 Professional recognition for IT Service Management (priSM®)
Section Break: Service Operation
12 Service Operation
12.1 Perspective
12.2 Critical Success Factors
12.3 Metrics
12.4 Function metrics
12.5 Process metrics
Section Break: Continual Service Improvement
13 Continual Service Improvement (CSI)
13.1 Critical Success Factors
13.2 Metrics
13.3 Process metrics
Appendices
A Naming and numbering of metrics
B Metrics Registry – example
C Bibliography
This book is designed to be practical; it avoids diagrams, process flows and detailed definitions where these are obvious. What managers need is a view of the goals and objectives of a project or program, and then an understanding of what methods, tools, resources, processes and so on are required to get it working. This book is primarily about design – the design of metrics for Service Management, which includes designing end-to-end service metrics. To measure services end-to-end, it is necessary to design process metrics, including Service Management process metrics as well as technical and other supporting metrics.
Ideally the reader of this book will already be familiar with Service Management, ITIL® and ISO/IEC 20000, as well as, perhaps, PRINCE2, M_o_R, ISO/IEC 15504 (SPICE), CMMI, Six Sigma, COBIT and other relevant areas. The book only includes the smallest possible top-level introduction to any of the above for those readers who might not be familiar with a particular area. Anybody intending to achieve a level of maturity in Service Management is advised to read the books recommended in the bibliography - in particular, the five ITIL® lifecycle books - and to take a structured approach to professional development.
If you have not worked with metrics before, then it would be worthwhile reading the first chapters of this book to avoid some of the more common and dangerous pitfalls. Even if you have worked with metrics, it is probably wise to review these, as mistakes can be subtle and difficult to rectify later. Designing metrics is not simple or quick – if it seems so, then the metrics are likely to be at best inadequate and, at worst, dangerous and counter-productive.
Many organizations have suffered severe unexpected consequences - directly as a result of applying metrics that were easy to measure and control, but not actually in line with business requirements. A well-known example was the use of a single metric, 'waiting list time', to define improvements to the UK National Health Service – the result was that everybody met the metric, but the actual waiting lists remained or, in fact, became longer and less fair. The overall result was a reduced quality of service and increased dissatisfaction, even as the metric was reported as a success.
Mostly this book is designed to be used as a practical tool during workshops:
• Where services, business and technical, with their processes, are designed.
• Where organizational improvement must be addressed urgently.
• Where a merged, or re-organized, service delivery team must decide which measures will enable results to be achieved quickly, support longer-term improvement and allow deviations to be measured accurately.
Use this as a tool, for guidance. If a suggestion suits you, use it. If you need to modify it for your own situation, go ahead; this is not supposed to be a stone tablet! If you’ve got a tricky issue to discuss, take it along, so you can explore some possible metrics – at least that should provide a common starting point for discussion and, maybe, some ideas for ways forward.
The layout is uncluttered, designed to be easy to navigate quickly – to find an idea, for example, during a meeting. Where possible, repetition is avoided.
Each metric described includes a paragraph giving some context. This is a reminder that metrics do not stand in isolation. Often this context will include warnings of possible misinterpretation, and suggestions for refining the metric. With any luck, in the heat of the moment, these will be some of the most helpful parts. They’re better read when actually designing a metric, rather than all the way through.
All metrics should include, for example, a RACI (Responsible, Accountable, Consulted, Informed) matrix to allow proper design of the metric to include the people accountable for it being achieved, those responsible for measuring and managing it and those consulted about its design, improvement or interpretation as well as those informed, through reports, dashboards, alerts or other means.
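As a sketch, a RACI matrix for a single metric might be recorded as a simple mapping from role to RACI letter; the roles below are hypothetical, not prescribed by the book:

```python
# Hypothetical RACI matrix for one metric: role -> RACI letter.
raci = {
    "Service Owner": "A",          # Accountable for the metric being achieved
    "Service Level Manager": "R",  # Responsible for measuring and managing it
    "Process Architect": "C",      # Consulted on design, improvement, interpretation
    "IT Steering Committee": "I",  # Informed via reports, dashboards or alerts
}

# A sanity check worth automating: every metric should have exactly one
# Accountable party.
accountable = [role for role, letter in raci.items() if letter == "A"]
assert len(accountable) == 1
```

A check like this can run whenever a metric record is saved to the Metrics Register, catching incomplete RACI assignments early.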
The Appendix contains the full form for recording a metric. Space does not permit the inclusion of all this detail for all metrics, so only the main descriptors of each metric appear in a table at the end of each chapter. The full set of metrics is best used electronically, as an on-line Metrics Register that links to your Requirements, Continual Service Improvement (CSI) and Risk Registers and to the relevant Service Design Packages.
The flow of the book is contained in Figure 1.1 below. Notice that Design forms a major part of the book.
The book is organized as follows:
– Introduction (this chapter), explaining the purpose and structure of this book
– Managing, metrics and perspectives: key principles of metrics
– Governance: the metrics required for effective governance
– Service Strategy: the metrics required for the first phase of the service lifecycle
– Service Design: the metrics required for the second phase of the service lifecycle
– Chapters exploring Service Design-related topics in more detail:
ο Classifications of metrics
ο Outsourcing and emerging technologies
ο Cultural and technical considerations
ο Tools and tool selection
Figure 1.1 Metrics book topic flow
– Service Transition: the metrics required for the third phase of the service lifecycle
– Chapter exploring Service Transition-related topic in more detail: Service Transition and management of change
– Service Operation: the metrics required for the fourth phase of the service lifecycle
– Continual Service Improvement: the metrics required for the final ongoing phase of the service lifecycle
– Appendices
The ultimate aim of Service Management is to produce Value; this is delivered during Service Operation, and measurements facilitate the definition and scoping of Continual Service Improvement. Corporate Strategy and Governance give rise to new requirements that drive the design of Services, Processes and Metrics. The design of metrics is critical to assuring the efficacy of the Service Lifecycle processes and in governing the delivery of value.
As with a lot of folklore, there are wise sayings on both sides of the question about how to use metrics as part of management:
‘You can’t manage what you can’t measure’ [attributed to Tom DeMarco developer of Structured Analysis]
‘A fool with a tool is still a fool’ [attributed to Grady Booch, developer of the Unified Modeling Language]
Both of these are true. Managing requires good decision-making, and good decision-making requires good knowledge of what is to be decided. ITIL®’s concept of Knowledge Management is designed to avoid the pitfall of relying on tools and numbers without the knowledge to use them well.
Relying simply on numbers given by metrics, with no context or perspective, can be worse than having no information at all, apart from ‘gut feel’. Metrics must be properly designed, properly understood and properly collected, otherwise they can be very dangerous. Metrics must always be interpreted in terms of the context in which they are measured in order to give a perspective on what they are likely to mean.
To give an example: a Service Manager might find that the proportion of emergency changes to normal changes has doubled. With just that information, most people would agree that something has gone wrong – why are there suddenly so many more emergency changes? This could be correct, but here are some alternative explanations of why this is the case:
• If the change process is new, this may reflect the number of emergency changes that the organization actually requires more accurately. Previously these changes might have been handled as ordinary changes without proper recognition of the risk.
• In a mature organization, a major economic crisis might have intensified the risk of a number of previously low-risk activities. It would be the proper approach for the Service Manager, recognizing changes related to these, to make them emergency changes.
• The change management process might have been improved substantially in the current quarter, so much so that the number of ordinary changes that have been turned into standard changes has led to a halving of the number of normal changes. The number of emergency changes has stayed exactly the same, but the ratio is higher because of the tremendous improvement in the change process.
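The third scenario above can be illustrated with a quick calculation; the figures are invented for illustration:

```python
# Hypothetical quarter-on-quarter change volumes (invented for illustration).
previous = {"emergency": 10, "normal": 200}

# After process improvement, half the normal changes become standard changes;
# the absolute number of emergency changes is unchanged.
current = {"emergency": 10, "normal": 100, "standard": 100}

ratio_before = previous["emergency"] / previous["normal"]  # 0.05
ratio_after = current["emergency"] / current["normal"]     # 0.10

# The ratio has doubled even though emergency changes have not increased at all.
print(f"before: {ratio_before:.2f}, after: {ratio_after:.2f}")
```

This is exactly why the ratio needs complementary metrics (such as the count of standard changes) before it can be interpreted safely.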
Even a very simple and apparently uncontroversial metric can mean very different things. As with most management, there is no ‘silver bullet’. Metrics must be properly understood, within context, in order to be useful tools. To ensure that they are understood, metrics must be designed. For best results, service metrics should be designed when the Service itself is designed, as part of the Service Design Package, which is why the ‘Design’ section in this book is the largest.
The metric template used in this book includes the field ‘Context’ specifically to allow each metric to be properly documented so that, when it is designed, the proper interpretation and possible problems with it can be taken into account. The design of a metric is not simply the measure and how it is taken; it must also make it clear how it will be reported and how management will be able to keep a perspective on what it means for the business - particularly its contribution to measuring value.
This is also a reason why the number of metrics deployed must be kept as small as possible (but not, as Einstein put it, ‘smaller’!). Metrics must also be designed to complement each other. In the example above, the ratio between emergency and normal changes is an important and useful one to measure, but it could be balanced by measuring the number of standard changes, the business criticality of changes and, perhaps, the cost of changes.
These would all help to embed the metric into a context that allows proper interpretation.
Metrics are needed not only to identify areas needing improvement, but also to guide the improvement activities. For this reason, metrics in this book are often not single numbers, but allow discrimination between, for example, Mean Time To Repair (MTTR) for Services, Components, Individuals and third parties – while also distinguishing between low priority incidents and (high priority) critical incidents. The headline rate shows overall whether things are improving, but these component measures make it possible to produce specific, directed improvement targets based on where or what is causing the issue.
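A breakdown of this kind can be computed directly from incident records. A minimal sketch, where the record fields and figures are assumptions rather than the book's template:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical incident records: (service, priority, hours to repair).
incidents = [
    ("Service A", "critical", 4.0),
    ("Service A", "low", 1.0),
    ("Service B", "critical", 8.0),
    ("Service B", "low", 2.0),
]

def mttr_breakdown(records):
    """Mean Time To Repair per (service, priority) group."""
    groups = defaultdict(list)
    for service, priority, hours in records:
        groups[(service, priority)].append(hours)
    return {key: mean(hours) for key, hours in groups.items()}

headline_mttr = mean(hours for _, _, hours in incidents)  # overall trend
detail = mttr_breakdown(incidents)  # where improvement effort should be directed
```

The same grouping could be applied to components, individuals or third parties by swapping the first field of the record.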
Metrics are often used to measure, manage and reward individual performance. This has to be handled with great care. Individual contributions that are significant to the organization may be difficult to measure. Some organizations use time sheets to try to understand where staff are spending their time, and thus understand how their work is contributing to the value delivered. These tend to be highly flawed sources of information. Very few individuals see much value in filling in timesheets accurately, and those that do see them as useful find them inadequate records for capturing busy multi-tasking days.
There is a less subjective method – that of capturing the contribution of individuals and teams as documents in the Service Knowledge Management System (SKMS). For this to work, a good document management system with a sound audit trail is required, along with software that will identify what type of documents have been read, used (as in used as a template or used as a Change Model), updated (as a Checklist will be updated after a project or change review) or created (as in a new Service Design Package (SDP) or entry in the Service Catalogue). Each type of document update can be given a weight, reflecting the value to the organization (a new SDP that moves to the state ‘chartered’ is a major contribution, while an existing Request for Change (RFC) that is updated to add more information on the risk of the change would be a minor contribution).
Properly managed, such a scheme can give a very accurate and detailed picture of where in the Service Lifecycle work is being done, so missing areas (for example, maybe there are not enough Change Models being created) can be highlighted and the increased weighting communicated to the organization. If these measures are properly audited they can be used as incentives for inter-team competition as well as for finding the individuals worth singling out for recognition and reward. Being an objective system this form of reward, based on the actual contribution to value delivered, can be highly valued, even by very technical and senior staff, as well as being an incentive (and measure of progress) for new or junior staff.
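Such a weighting scheme might be implemented over the SKMS audit trail along these lines; the document types, event names and weights below are hypothetical illustrations, not values from the book:

```python
# Hypothetical weights per (document type, event): higher = more value delivered.
WEIGHTS = {
    ("SDP", "created"): 10,          # new Service Design Package: major contribution
    ("SDP", "chartered"): 20,
    ("ChangeModel", "created"): 8,
    ("Checklist", "updated"): 3,
    ("RFC", "updated"): 2,           # e.g. adding risk detail: minor contribution
}

# Hypothetical audit-trail entries: (person, document type, event).
audit_trail = [
    ("alice", "SDP", "created"),
    ("alice", "RFC", "updated"),
    ("bob", "ChangeModel", "created"),
    ("bob", "RFC", "updated"),
]

def contribution_scores(entries, weights):
    """Sum weighted document events per person; unknown events count as 1."""
    scores = {}
    for person, doc_type, event in entries:
        scores[person] = scores.get(person, 0) + weights.get((doc_type, event), 1)
    return scores

scores = contribution_scores(audit_trail, WEIGHTS)
```

Re-weighting an under-represented activity (for example, raising the weight for new Change Models) is then a one-line change that can be communicated to the organization.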
In certain circumstances, external contracts particularly, penalty clauses may be required. Ideally these should be set so they are not triggered by minor deviations that can swiftly be remedied. Also, ideally, positive incentives should cover most of the relationship, with penalty clauses kept as a last resort. If penalty clauses are invoked frequently, then the business relationship is likely to, eventually, break down – before this happens, it would be wise to change supplier or have a fundamental reevaluation and renegotiation of the contract to avoid this situation.
Metrics can be understood to work from the top downwards. Business measures (such as profit, turnover, market share, share price, price/earnings ratio) are the ultimate measures of success and all other metrics should, ultimately, contribute to the success of these metrics. Service Management identifies services; some deliver business results directly, some contribute indirectly. These can be measured by Service Metrics. Business services and internal services often depend on processes for their proper operation, and these can be measured by Process Metrics. Services and Processes rely on the underlying technologies that deliver these, and these can be measured by technology processes. Ideally, the sequence is Business Measure <- Service Measure <- Process Measure <- Technology Measure. Some metrics have value outside this direct relationship, but, where possible, metrics should be evaluated for how well they contribute to this value chain.
For the above to work, metrics, of whatever sort, must be designed as an integrated part of the design of any Service, Process, or Technology.
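One way to make the value chain explicit is to record, for each metric, the higher-level metric it contributes to, so the chain from technology up to business measure can be checked programmatically. The layering comes from the text; the metric names below are invented for illustration:

```python
# Each metric records the higher-level metric it supports (None at the top).
metric_supports = {
    "server_cpu_utilisation": "change_throughput",        # technology -> process
    "change_throughput": "email_service_availability",    # process -> service
    "email_service_availability": "customer_retention",   # service -> business
    "customer_retention": None,                           # business measure
}

def chain_to_business(metric, supports):
    """Follow a metric up the value chain to the business measure it serves."""
    chain = [metric]
    while supports[metric] is not None:
        metric = supports[metric]
        chain.append(metric)
    return chain

chain = chain_to_business("server_cpu_utilisation", metric_supports)
```

A metric whose chain does not terminate at a business measure is a candidate for redesign or retirement.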
Useful metrics are more than just measures. A well-defined metric should also have the following attributes, covering its description, dependencies and data:
• Be under Change Control in the Metrics Register
• Have a name/ID
• Have a unique reference
• Have an owner
• Have a version number
• Have a category, e.g.:
– Business Metric
– Service Metric
– Process Metric
– Technology Metric
• Show Status (with transition dates and times), e.g.:
– Created
– Design phase
– Approved/Rejected
– Chartered
– Testing
– Operational (Not Active, No Data, Green, Amber, Red, Retired)
– Retired
• Have secure access control
• Leave a clear audit trail
• Link to the Strategic, Tactical and/or Operational goals they support
– If a KPI, link to the CSF
– Link to the Business Objective
– If a Service metric, link to the Service Level Agreement (SLA)/Operational Level Agreement (OLA)/Underpinning Contract (UC) and Service Design Package (SDP)
– If a technical metric, link to the Operations Plan
• List the Requirements they measure
• List the relevant Stakeholders (RACI)
• Link to the process, procedure or activity they monitor
• Link to Test Plan
• Define the monetary value contributed
• Link to any Certification (e.g. ISO/IEC 20000) they support
• Link to the relevant part of the Communication Plan(s) they serve
• Indicate the control loop operating them
– Thresholds
– Targets
– Monitoring schedule
– Norms
– Action when over threshold (automatic or manual)
• Link to where they are benchmarked
• Link to their Service Improvement Plan (SIP) in the CSI Register
• Have a Description
• Have a Formula defining it
• Link to data gathered
• Link to alerts
• Link to any relevant Incidents, Problems, Changes or Releases
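The attribute list above can be sketched as a record type for a Metrics Register entry. This is a simplified illustration covering only a subset of the fields; the field names are assumptions, not the book's template:

```python
from dataclasses import dataclass, field

@dataclass
class MetricRecord:
    """Simplified Metrics Register entry (subset of the attributes listed above)."""
    metric_id: str                # unique reference
    name: str
    owner: str
    version: int                  # under change control in the Metrics Register
    category: str                 # Business / Service / Process / Technology Metric
    status: str                   # Created, Design phase, Approved, Operational, ...
    formula: str
    description: str = ""
    csf_links: list = field(default_factory=list)   # CSFs this KPI supports
    thresholds: dict = field(default_factory=dict)  # e.g. {"amber": 90, "red": 80}

m = MetricRecord(
    metric_id="GPPFC01", name="Financial Plan", owner="Finance Manager",
    version=1, category="Process Metric", status="Operational",
    formula="% actions on time",
)
```

A real register would add the remaining attributes (audit trail, access control, links to SLAs, test plans and the CSI Register), typically backed by a database rather than an in-memory object.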
A possible layout is included in the Appendix, with examples of how this works. The book uses the ID, description and formula for metrics, rather than the full detail, to save space.
A Critical Success Factor (CSF) is what it says it is – ‘Critical’. The easy test to see if something is a CSF is to ask: “If this doesn’t work, will the service have failed?” So to understand what the CSFs are, it’s important to establish what the goals are. Once CSFs have been established, then appropriate Key Performance Indicators (KPIs) must be developed to measure them – noting that one CSF may need a number of KPIs. Note that a KPI is not just a metric; it is a metric with a specific threshold value (the Indicator) above, or below, which the CSF is impacted.
For example: The goal of an automated teller machine (ATM) cash system is to dispense money. An ATM provides money and receipts. Availability of money is a CSF: an ATM that can’t give you money, the right amount, when you need it, is useless. A receipt is not a CSF because you can usually do without it.
You can usually say what your CSFs are (in the SDP) quite easily. Finding KPIs that measure them is a much more difficult job.
Thinking of the ATM example, and the CSF of providing money, here are some possible KPIs (all measured as an average percentage over all ATMs, and, for each ATM as a deviation from the average):
• percentage of time ATMs run out of money
• percentage of time an ATM customer gets the wrong amount of cash
• percentage of time an ATM customer gets forged banknotes
• percentage of time an ATM customer gets mugged within 15 minutes of drawing cash from an ATM
In each of the above cases, the ATM service may have failed the customer. In the first case, we don’t know how many customers have been let down. We could estimate it from the average number of customers expected at that ATM at that time of day, time of the month and time of the week (we’d need to measure all of these to make a reliable estimate). Or we could, perhaps, measure how many customers approach the ATM and turn away – we’d need a camera fitted to the ATM to measure that, and somebody to examine the footage.
In the fourth example, we might think it isn’t the business of the ATM service to provide security for customers for 15 minutes after they’ve drawn money – but measuring this would allow the bank to close ATMs in dangerous areas (where this measure is >1 per month, say), and/or install security cameras and guards to reduce the risk. These measures could enable the KPI to be revised downwards, as well as customer satisfaction to be increased.
The point of this example is to show how to think of the interaction of CSFs and KPIs, and to understand that a number of KPIs are likely to be required to measure just one CSF.
It’s also important to realize that CSFs are defined in the SDP. Part of the job of designing a service is to decide what KPIs need to be measured – and then to design means of measuring those KPIs. If you look at the ATM example above, the KPI about forged banknotes might be very difficult to measure, as customers may only discover that a banknote is forged quite some time later – in fact, the forged banknote might pass through a few hands before anybody realizes that it is forged. In a real example, where a bank found that many forged banknotes of a particular denomination were being issued from its ATM system, it could only respond by withdrawing that denomination until a new, less easily forged, banknote was issued. This example shows that having some technology in an ATM that checked whether banknotes were genuine would help satisfy the CSF of providing the right amount of money to the customer.
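Tying the ATM example back to the definition of a KPI as a metric plus a threshold, the evaluation might look like this; the threshold and the figures are invented for illustration:

```python
# Hypothetical KPI: percentage of time ATMs are out of money, per month.
# The threshold value is what turns the metric into an indicator: above it,
# the CSF 'availability of money' is considered impacted.
KPI_THRESHOLD = 2.0  # percent

def evaluate_kpi(out_of_money_hours, total_hours, threshold=KPI_THRESHOLD):
    """Return (measured percentage, whether the CSF is impacted)."""
    pct = 100.0 * out_of_money_hours / total_hours
    return pct, pct > threshold

# Hypothetical month: 18 hours out of money across a 720-hour period.
pct, breached = evaluate_kpi(out_of_money_hours=18, total_hours=720)
```

Here 18/720 gives 2.5%, above the 2% threshold, so this KPI would flag the CSF as impacted and trigger whatever action the metric's control loop defines.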
The board of directors is accountable for the governance of an organization, both corporate governance and corporate compliance. Service Management exists as a practice, in part to ensure that services operate according to these governance requirements and to account for compliance.
There is a two-way responsibility. The board, often through the Risk Committee, will communicate governance and compliance requirements and risks to the organization. The organization, including, specifically, Service Management, needs to alert the board, through the Risk Committee where appropriate, of potential risks to compliance or areas where further governance policy is required.
Governance, viewed as a process, can be measured; these measures can give the board confidence that it is governing appropriately and that the auditors will be satisfied with the level of compliance.
In the ‘big picture’ view, it is this confidence at board level that is the objective in measuring IT governance as part of Service Management.
In terms of the Management of Value, the job is to design metrics that define, and then measure, the Value Cascade. The Value Cascade goes from: Organization Goals → Program Objectives → Project Objectives → Value Drivers → Design Requirements → Design Solutions → Products.
ID: GPPFC01
Name: Financial Plan
Area: Governance
Process/Function: Financial Controls
Unit: % actions on time
KPI/Metric: Metric
Description: Is the Financial Plan up to date, with meetings and action items on time?
Formula: % actions on time = (# Financial meetings on time) / (# Financial meetings) * 50 + (# actions on time) / (# actions) * 50
This requires meeting management to be in place, using a document management system and workflow for meeting management.
Context: Financial meetings should be arranged to plan for the annual budget, design financial reports, produce monthly reports, and report variations from the current budget. Actions measured should include actions to raise awareness and communicate costs, risks and other suggestions, relating to the implementation of Service Strategy, to the Risk Committee, Internal Audit and IT management.
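The formula for GPPFC01 can be computed directly from meeting-management data; a sketch, with input values invented for illustration:

```python
def pct_actions_on_time(meetings_on_time, meetings, actions_on_time, actions):
    """% actions on time per the GPPFC01 formula: meetings and actions are
    weighted equally, 50 points each."""
    return (meetings_on_time / meetings) * 50 + (actions_on_time / actions) * 50

# Hypothetical quarter: 5 of 6 financial meetings held on time,
# 18 of 20 action items completed on time.
score = pct_actions_on_time(meetings_on_time=5, meetings=6,
                            actions_on_time=18, actions=20)
```

In practice the four inputs would come from the document management system and meeting workflow that the formula's note says must be in place.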
ID: SSPFM02
Name: Customer satisfaction with business alignment
Area: Governance
Process/Function: Financial Management