S060 - Define and agree Selection Scoring Scheme
DEFINITION
Define and agree the approach that will be taken to the detailed evaluation of tenders.
SUMMARY
Application Software Implementation Projects do not demand a specific method for evaluating vendors’ responses. This is in recognition of the differing needs of projects, which may vary according to the type of organisation, the size and complexity of the requirements, the type of application, the systems architecture, etc. Since the approach remains flexible, it is necessary at this stage to select an appropriate approach and agree it with the client organisation.
PATH PLANNING GUIDANCE
This process is optional. It is normal practice where scoring techniques will be used to give a balanced view of the competing solutions, although it may be trivial in nature if a standard approach is agreed without further investigation.
DEPENDENCIES
Prerequisites (Finish-Finish):
- (none)
Dependent procedures (Finish-Start):
- Prioritising requirements by weights (see S090)
- Scoring of responses (see S190)
RECEIVABLES
- (none)
DELIVERABLES
- Defined, agreed scoring scheme
TOOLS
- Examples: Structured Scoring Scheme
- Examples: Scoring Worksheet
- Examples: Request for Proposal / ITT
- Examples: Data for System Quality Charts
- Examples: SQ/VQ Charts - (Overall Average)
- Examples: Data for System Quality/Vendor Quality Charts
- Examples: System Requirements Worksheet
DETAILED DESCRIPTION OF TASKS
The purpose of this process is to fix the method that will be used to evaluate the vendors’ proposals. The method should be agreed with the client organisation. It may be sufficient simply to state and agree the method, or it may warrant some discussion and consideration of the issues.
Note that the deliverable is an agreement about the method that will be employed, NOT the actual detailing of weights or scores, which will be performed in a later process.
Purpose of the scoring scheme
Most formalised selection approaches rely on some form of scoring. It is important to understand that the scoring process is only a tool to assist the client organisation to make the best choice between various solutions each with its own competing merits.
Typical objectives for the scoring scheme might be:
- to provide a framework within which the many considerations differentiating between the solutions undergoing evaluation can be documented, reviewed, and compared, whilst being kept in context,
- to ensure that emphasis is placed fairly according to the organisation’s true needs,
- to prevent the exercise of personal bias and to demonstrate the completeness and fairness of the process,
- to highlight all significant differences between the competing solutions,
- to indicate general trends in the relative merits of the solutions, for example, areas which are particularly good or bad about a particular solution,
- to assist in the formulation of a recommended choice in principle and in the presentation of that recommendation.
BUT BEWARE the dangers of using the scoring scheme:
- The best solution may not get the overall highest score; for example, the highest-scoring solution may have some critical weaknesses whereas another solution might provide a reasonable fit to all aspects of the requirements.
- How many apples are equal to five pears? ... Comparisons cannot easily be made between different aspects of the evaluation, for example how do you choose between a functionally rich solution, a cheap solution and a solution where the supplier offers a high level of service?
- Unless there is a good way of balancing the different aspects of the evaluation, there is a danger that the more questions there are about a particular topic, the larger the influence of that aspect on the overall result. For example, it is easy to ask 100 specific questions about the nature of the fields on a master file record, but there might be only a single question about a different, vital aspect of functionality. Some mechanism must be used to demonstrate that the one question might, in fact, be far more significant than the 100 questions together.
- It is easy to believe in the numbers and lose sight of the true factors. Some people will read more truth into the numbers than actually exists; for example, some people believe that a certain score implies that modifications will be required to the package. This is nonsense - such pronouncements can only be made based on the actual facts, not on the total scores for a wide range of questions.
- When the results do not look the way the team expected, the numbers are sometimes changed. This suggests either that insufficient attention was paid to the original weights and scores or that the mechanism is being manipulated to distort the results.
A good scoring system is a tool for the team that helps them to identify and focus on the key issues. It should allow easy comparison of responses from different vendors to the same requirement. It should allow the comparison to be made for each topic or aspect of the requirements, both at the detailed level and at each level of summarisation. It should highlight areas where important needs cannot be fully met and it should give appropriate emphasis to all aspects of the comparison in a balanced manner.
Top Down
An important objective of the scoring scheme should be to maintain balance between the different aspects of the overall requirement. This is easiest to achieve by using a “top-down” approach.
In the top-down approach, you start at the top with the overall needs and work downwards towards the specific individual requirements. The overall requirements are broken down into a number of sub-headings representing major areas of requirements, or other major areas of the assessment (eg vendor quality). The relative importance of each aspect is then considered and agreed. Where there are several applications involved in the overall requirements, the first stage of breakdown might be into the separate application areas plus some cross-application issues such as the quality of the vendor.
The relative importance may be shown as a number or as a simple code. The number is usually referred to as a “weight” as it will subsequently be used to add weight to the important aspects or to reduce the significance of the unimportant aspects during the scoring process. This is normally achieved by multiplying the score representing the adequacy of the vendor’s answer by the weight showing how significant this question or aspect is in the overall evaluation.
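For example (using figures from the worked example later in this section), a question weighted 20 out of 100 and answered with a compliance score of 50% would contribute 20 × 50% = 10 towards the total for that aspect.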
Unless the selection is simple, this process of breaking down the subject into successive topics would be repeated until the final detailed questions are reached.
It may be possible to use the top-down approach to split a total number of marks between the detailed questions. Given the large ratio between the number of detailed questions and the top-level breakdowns, it is usually more convenient to use a hierarchy of weights to split the scores level by level rather than to have low-level weighting factors which correctly balance the importance of every single low-level question in comparison to every other one.
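As an illustration of how such a hierarchy of weights combines, the sketch below (a hypothetical Python fragment, not part of the method itself) multiplies percentage weights down through the levels to show the effective overall weight of a single detailed question; the specific figures and level names are assumptions for illustration only.

```python
# Hypothetical sketch: combining a hierarchy of weights, each expressed out of
# 100 at its own level, into the effective overall weight of one detailed
# question. The figures below are illustrative assumptions.

def effective_weight(path_weights):
    """Multiply percentage weights down the hierarchy (e.g. application ->
    aspect -> detailed question), returning the question's share of the
    overall evaluation as a percentage."""
    share = 1.0
    for weight in path_weights:
        share *= weight / 100.0
    return share * 100.0

# e.g. an application worth 25% overall, an aspect worth 20% of that
# application, and a question worth 40% of that aspect:
print(effective_weight([25, 20, 40]))   # 2.0 (% of the overall evaluation)
```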
Bottom-up approach
The top-down approach is useful in reviewing and defining the relative importance of major aspects of the solution. It could be extended down to the low-level questions to give the final low-level relative importance weights. Although feasible, the splitting of a total score amongst the many low-level questions is not always easy, particularly if several people are involved in the process. It also means that standardised definitions for the meaning of the low-level weights cannot be stated.
It is often easier to agree a rule for awarding scores “bottom-up”, that is, evaluating the importance of each low-level question using an agreed set of weights. It is possible to define any number of schemes based on the complexity of the situation and the desires of the client organisation. Generally speaking, it is better to have a simple system with a limited number of choices - the more choices there are, the more time the team will spend debating which one is right.
An example of a simple low-level weighting system might be:
- 5 = very significant requirement without which the overall solution would not be satisfactory
- 3 = important requirement which the organisation definitely requires
- 1 = other genuine requirements that the organisation desires but which would not present any major problem if they could not be met.
With a simple structure like this, it might be possible to use codes rather than numbers to represent the “weights”. For convenience, and to reduce time and effort, this concept is sometimes combined with the concept of mandatory and desirable (or criticality - knockouts, business critical, or desirable). However, these are not exactly the same thing. The concept of “Mandatory” is used to eliminate solutions which fail to meet an absolute requirement, whereas the weight is used to show how much significance should be given to how well the requirement is met. It is therefore possible for a requirement to be absolute yet carry very little weight in the comparison, provided the minimum need can be met - eg the existence of an audit trail may be vital for legal reasons but of no real interest in weighing up the best solution.
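The distinction can be made concrete with a short sketch. The Python fragment below is illustrative only: the requirement texts, data structures and function names are assumptions, but it shows a mandatory flag being used to knock out non-compliant solutions while the 5/3/1 weights above determine how much each answer contributes to the comparison.

```python
# Illustrative sketch only: "mandatory" eliminates solutions outright, while
# the weight balances how much the quality of the answer matters. The example
# requirements and names are assumptions.

requirements = [
    {"text": "Audit trail provided",            "mandatory": True,  "weight": 1},
    {"text": "Multi-currency order processing", "mandatory": False, "weight": 5},
    {"text": "User-definable report layouts",   "mandatory": False, "weight": 3},
]

def is_knocked_out(responses):
    """A solution is eliminated if any mandatory requirement is not met at all."""
    return any(r["mandatory"] and responses[r["text"]] == 0 for r in requirements)

def weighted_total(responses):
    """Otherwise the weights determine each answer's contribution to the total."""
    return sum(r["weight"] * responses[r["text"]] for r in requirements)

# Example usage with compliance scores on the 0/2/4 scale described later:
package_a = {"Audit trail provided": 4,
             "Multi-currency order processing": 2,
             "User-definable report layouts": 4}
print(is_knocked_out(package_a))   # False
print(weighted_total(package_a))   # 26  (= 1*4 + 5*2 + 3*4)
```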
Bottom up meets top down
The low-level weights calculated on the “bottom-up” basis need to be scaled in balance with their importance using the summary-level weights calculated using the “top-down” approach. This scaling can either be multiplied through immediately, or it can be applied to the actual results, thus retaining the meaning of the low-level weights on the spreadsheet. The resulting spreadsheet will show valid and meaningful comparisons at each level of summarisation and at the lowest level of detail.
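A minimal sketch of this scaling is shown below (hypothetical Python, assuming the 5/3/1 bottom-up weights described above and a top-down aspect weight expressed out of 100; the figures are illustrative).

```python
# Hypothetical sketch: scaling bottom-up (5/3/1) question weights so that the
# aspect as a whole carries its agreed top-down weight. Figures are assumptions.

low_level_weights = [5, 3, 1, 3]   # bottom-up weights for one aspect's questions
aspect_weight = 20                 # top-down weight for the aspect (out of 100)

scale = aspect_weight / sum(low_level_weights)
scaled_weights = [w * scale for w in low_level_weights]
print(scaled_weights)              # approx. [8.33, 5.0, 1.67, 5.0] - totalling 20
```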
Scoring vendors’ responses for compliance
In addition to defining how the different aspects of the requirements will be weighted against each other to ensure a fair overall view, it must be decided how the vendors’ responses should be marked for compliance with the defined requirements. Again, Application Software Implementation Projects do not define a single fixed method for doing so, as the best approach will depend on the specific circumstances.
As with weights, the best methods are relatively simple - the greater the range of possible answers, the harder it can be to reach consensus within the project team. The basic need is to identify and record three basic situations. These are shown below, along with an example of how they might be scored:
- 0 = the response would not meet the requirement
- 2 = the requirement could be met but not in an ideal way, eg manual processing required or modifications required
- 4 = the response fully meets the requirement.
In addition to this, it can be helpful to identify the cases where:
- 1 = modifications are required to meet the requirement
- 5 = the response fully meets the stated requirement and offers additional significant benefit above that considered to be the basic requirement.
Note that separate attention should be paid to items requiring modification as these are likely to have significant impacts in terms of timescales and costs.
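The scale above can be captured in a simple lookup; the sketch below is a hypothetical Python fragment (the label text and function name are assumptions) showing the 0/2/4 scores, the optional 1 and 5 scores, and a flag for the items that need separate modification review.

```python
# Hypothetical sketch of the compliance scale described above. The label text
# is an assumption; the numeric scores follow the example in the text.

COMPLIANCE_SCORES = {
    "requirement not met": 0,
    "met only via modification": 1,               # track separately (cost/timescale)
    "met but not ideally (e.g. manual steps)": 2,
    "requirement fully met": 4,
    "fully met with significant extra benefit": 5,
}

def needs_modification_review(score):
    """Items scored 1 should be listed for separate timescale/cost assessment."""
    return score == 1
```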
Why use weights? Why use scores?
The simplest form of evaluation would be to use simple codes for weights and for the vendor’s subsequent compliance with each requirement. This would work the same way as the numeric system but remove the danger of believing the numbers have greater meaning than they do. The codes can either be left as codes allowing the reader to see the degree of compliance and forcing them to think about the overall implications, or they could be translated into numbers for summarisation.
This approach tends to place greater emphasis on the key differences but less emphasis on the overall quality of the solutions. It can lead to a more rapid selection decision provided the team understand that it is their role to draw the conclusions from the results.
Examples of scoring schemes
Many good scoring schemes have been devised and documented by Consultant project teams. There are several tools and examples illustrating some of these approaches.
A structured top-down/bottom-up scoring scheme in which all figures are expressed out of 100% is illustrated in Examples: Scoring Scheme. This weights the components of each level of summarisation out of 100% - for example, the overall requirement might be split between the major application areas and cross-application issues such as vendor quality. Each of these lower levels is itself then broken down into components weighted out of a total of 100% - for example, Sales Order Processing might be split into Master File, Transactions, Enquiries and Reports, as in the tables below. At the lowest level, marks are awarded according to merit and then scaled to total 100% at the first level of summarisation. The percentage weights at each level are then used to build up comparisons at different levels of summarisation.
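A minimal sketch of that scaling step, under the assumption that raw merit marks are simply normalised so that they total 100%, might look like this (the raw marks are illustrative; the resulting 20/30/10/40 split matches the Reports aspect weights in the worked example below):

```python
# Hypothetical sketch: raw merit marks for the questions within one aspect are
# scaled so that they total 100% at the first level of summarisation.

raw_marks = [8, 12, 4, 16]                      # marks awarded according to merit
total = sum(raw_marks)
weights_out_of_100 = [m * 100 / total for m in raw_marks]
print(weights_out_of_100)                       # [20.0, 30.0, 10.0, 40.0]
```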
The marking system suggested in this example is the simplest possible:
0 (0%) = non-compliant
1/2 (50%) = requirement can be met but not in an ideal manner
1 (100%) = requirement fully met
The example (Examples: Scoring Scheme) contains a detailed demonstration of how the relative importance weights are multiplied by the compliance scores to give weighted scores per question, and how these are subtotalled and successively rolled up to give comparisons at the various levels of summarisation. Note that this approach is also demonstrated in the example scoring spreadsheet.
Aspect: SOP - Reports

Detailed Question | Importance Weighting | Package A: Compliance (%) | Package A: Score | Package B: Compliance (%) | Package B: Score
Can the report writer access all databases simultaneously? | 20 | 50 | 10 | 100 | 20
Can the report writer calculate column & row totals and subtotals? | 30 | 100 | 30 | 100 | 30
Can the report writer use exponential functions? | 10 | 50 | 5 | 50 | 5
Can the report writer be used simply by an end user? | 40 | 100 | 40 | 50 | 20
Total Score for aspect | 100 | | 85 | | 75
Aspect: Sales Order Processing

Sub-Aspect | Importance Weighting | Package A: Compliance (%) | Package A: Score | Package B: Compliance (%) | Package B: Score
Master File | 25 | 62 | 16 | 57 | 14
Transactions | 25 | 71 | 18 | 95 | 24
Enquiries | 30 | 83 | 25 | 91 | 27
Reports | 20 | 85 | 17 | 75 | 15
Total Score for Aspect | 100 | | 76 | | 80
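The arithmetic behind these tables can be reproduced with a short script. The Python below is an illustrative reconstruction, not part of the methodology; it assumes each weighted score is rounded to the nearest whole number before summing, which matches the figures shown above.

```python
# Illustrative reconstruction of the worked example above. Weights out of 100
# are multiplied by compliance percentages to give a weighted score per
# question; these are summed per aspect, and the aspect scores are rolled up
# in the same way at the next level of summarisation.

def rolled_up_score(rows, package_index):
    """Sum of weight x compliance% for one package (1 = Package A, 2 = Package B)."""
    return sum(round(weight * scores[package_index - 1] / 100)
               for weight, *scores in rows)

# Aspect: SOP - Reports: (weight, Package A compliance %, Package B compliance %)
reports_questions = [
    (20,  50, 100),   # access all databases simultaneously
    (30, 100, 100),   # column & row totals and subtotals
    (10,  50,  50),   # exponential functions
    (40, 100,  50),   # simple for an end user
]
print(rolled_up_score(reports_questions, 1))   # 85 - Package A
print(rolled_up_score(reports_questions, 2))   # 75 - Package B

# Aspect: Sales Order Processing: (weight, Package A score, Package B score)
sop_sub_aspects = [
    (25, 62, 57),   # Master File
    (25, 71, 95),   # Transactions
    (30, 83, 91),   # Enquiries
    (20, 85, 75),   # Reports
]
print(rolled_up_score(sop_sub_aspects, 1))     # 76 - Package A
print(rolled_up_score(sop_sub_aspects, 2))     # 80 - Package B
```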
For more detailed explanation of this example, see Examples: Scoring Scheme, and Examples: Scoring Spreadsheet.
Another example of top-down structuring is used in the Examples: SQ/VQ charts (detailed and summary levels) which show system quality and vendor quality charts along with presentation graphics for reporting the findings. These assume that the requirements and questions will have been structured into the headings used in the Example: Request For Proposal (RFP).
Deciding and documenting the scoring scheme
The main deliverable from this process is an agreed approach to scoring. It is not necessary to use a formal report for this purpose, although it can be helpful to distribute an explanation of the agreed approach so that all participants understand how it will work (see Examples: Scoring Scheme).
In most cases, the project leader will be able to identify a suitable existing approach and propose this to the client organisation without needing to resort to detailed research or consultation. The value in choosing an existing approach is that explanations, tools and examples will be available.
In very complex environments it may be necessary to consult a large number of people and hold workshops to agree the approach. In this case, it is recommended that an Implementation Paper be used to control and document the considerations and results. (But note that this is not normal practice and would lead to additional time and resource costs.)