S240 - Benchmarking
DEFINITION
A series of controlled trials seeking to demonstrate a proposed solution’s ability to meet the required functionality and/or workload.
SUMMARY
Benchmarking will normally be applied only to preferred solutions, as considerable expenditure or work may be involved. In a Benchmark, the actual requirements are simulated in a controlled manner to demonstrate that the proposed solution can meet those requirements. It is usually used to prove capabilities claimed by the vendor where there is significant risk or doubt as to whether the proposed solution will, in fact, be adequate.
Most often, Benchmarking is used to measure a system’s ability to meet volume requirements, such as response times for a given size of user population, concurrent access and throughput rates. It may also be used to confirm what processor capacity, data storage or network throughput is required.
In other cases, Benchmarking is used to demonstrate that a proposed aspect of the functionality will meet the client organisation’s needs. For example, it may be used to show that the report writer can be configured to meet an unusual reporting requirement or that the system will integrate with a specified existing system.
The full proposed system, or the aspect to be studied, would be set up and appropriate tests defined. The tests will normally be conducted under controlled conditions, for example making sure that other computer systems running concurrently do not interfere with the performance of the proposed system.
PATH PLANNING GUIDANCE
This process is optional and is only required where there is a significant aspect requiring Benchmarking.
DEPENDENCIES
If required, this process is normally undertaken when a preferred solution has been identified but before the decision has been finalised.
Prerequisites (Finish-Finish):
- Agree final marks and issues (S230)
Dependent procedures (Finish-Finish):
- Prepare Selection Report (S250)
- Negotiate terms with preferred vendor(s) (S300)
RECEIVABLES
- Selection Issues List / Log
- report from Package Fit tests (if any)
DELIVERABLES
- Benchmarking findings for the Selection Report
- Benchmark Report (optional)
TOOLS
- Skeleton Deliverables: Testing forms
DETAILED DESCRIPTION OF TASKS
The need for a benchmark
Benchmarks are not common practice during systems evaluations. In most cases the selection decision will be made on the basis of a theoretical evaluation of the competing solutions, using evidence drawn from a wide range of sources.
Benchmarking can be useful in validating or resolving a specific performance issue that is unique to the client organisation’s environment, such as specific performance limitations, extensive enhancements, difficulties with prior implementations, or unsubstantiated claims. Because it is time-consuming, and because it is difficult to replicate a specific problem or situation across multiple systems, Benchmarking should not be conducted arbitrarily. If system performance is a critical issue and Benchmarking is required, this should have been noted in the Invitation to Tender.
There may be times when either:
- there is a significant risk factor in the overall project such that the chosen solution should be proven beyond all reasonable doubt before being finally selected (for example, a system to be replicated in thousands of locations worldwide and involving very high costs), or
- there are aspects of the preferred solution which represent risks or where there is some doubt over the vendor’s claims, such that it is worth testing these aspects.
In the first case, a Benchmarking exercise is likely to be planned into the project and included in the overall schedule given to prospective bidders. In the second case, it may well be that the decision to hold a benchmark is only taken after reviewing the proposals and considering the issues.
Benchmarking can involve significant cost and effort. It will often mean acquiring equipment and setting up the proposed solution, then building a “life-like” model of the final system and testing it with high-volume usage.
Given that this is performed before the selection decision is finalised, the exercise will also require co-operation and the investment of time, resources and expenditure from the vendor. Depending on the size and profitability of the potential sale, the vendor may be unwilling to undertake this work free of charge and it may be necessary to agree terms and conditions. (If so, make sure normal good practice is followed in respect of contractual negotiations.) With small systems, the vendor may even be unwilling to co-operate unless a contract is signed and all costs are paid.
Define the issues and the tests
The project team should identify the issues which need to be addressed in a Benchmark exercise. If there is no significant outstanding issue from the selection exercise, it may not be necessary to hold a Benchmarking exercise. The issues will normally have been identified and noted in the Selection Issues List. These issues may have arisen as a result of the:
- criticality of the requirements concerned,
- high costs and/or risks identified,
- concerns noted on assessing the original proposals and further correspondence from the vendors,
- concerns noted in System Demonstration Questionnaires or visit reports,
- concerns noted in Site Visit Questionnaires or visit reports,
- Package Fit Test reports,
- Application Comparison Worksheet,
- Hardware/Software Comparison Worksheet.
The specific objectives of the Benchmarking exercise should be identified to address these issues. They should be detailed enough to identify the specific aspects of the testing to be performed. The Software Application Implementation Processes testing forms may be created for this purpose.
The objectives will then be expanded to describe the specific tests to be performed, and their desired results.
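One simple way to make the tests traceable is to record each one against the selection issue it addresses, together with its expected result stated before the run. The sketch below (Python, purely illustrative; the field names are assumptions, not a layout prescribed by the methodology) shows the kind of record that could back a testing form:

from dataclasses import dataclass
from typing import Optional

@dataclass
class BenchmarkTest:
    test_id: str           # cross-reference for the test control log
    issue_ref: str         # Selection Issues List entry being addressed
    description: str       # the specific test to be performed
    expected_result: str   # the desired result, stated before the run
    actual_result: str = ""        # completed during the test run
    passed: Optional[bool] = None  # left unset until the test is finalised

tests = [
    BenchmarkTest(
        test_id="BM-01",
        issue_ref="ISS-12",
        description="Month-end ledger posting with 100,000 journal lines",
        expected_result="Completes within the 4-hour overnight window",
    ),
]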
A suitable environment will be required to run the tests. Some of the main options may include the following:
Option: Vendor’s equipment and/or premises
The vendor should be able to run the tests at their own premises using their own equipment (at their own expense).
This is often the cheapest option, but great care must be taken to see that the tests are conducted thoroughly and fairly. Vendors may not wish to give the project team the level of access needed to verify this. Also, vendors may not always consider the time, expense and effort worthwhile if there is relatively little profit involved in the sale.

Option: Third party site, eg Computer Bureau or existing user’s premises
It may be possible to conduct the tests at other locations, for example a computer bureau where the required software is already installed or an existing user’s own premises. It may be possible to suggest this when making reference site visits to existing users (see Process S220).
This will probably give the project team greater control over the tests and more complete results. It will probably be more expensive, as the costs (eg hire charges, staff time, etc) are likely to be picked up by the project team. It may also be difficult to arrange temporary licences for any software components that are not already available at the third party location.

Option: Own existing computer installation
If the client organisation already has the appropriate hardware and systems software with sufficient spare capacity to run the proposed system, it may be possible to run the benchmark tests on site. This will require a temporary “trial” licence from the vendor.
This provides a good test that the software will work on the proposed configuration, and the initial installation work can often be re-used for the actual implementation of the solution, thus saving some effort later in the project. It is unusual, however, for sufficient capacity to be available to load a full-sized system on existing computer installations without needing to purchase and install additional hardware, eg more disk space or a faster processor. If additional hardware is required, this may mean considerable time and expenditure being invested at an early stage. This may be particularly inappropriate if it is uncertain which type of hardware will be used for the eventual system.

Option: Equipment supplied on a trial basis
Where the proposal includes the hardware, the vendor may be willing to supply it on a trial basis. If the proposal is accepted, the equipment would become the actual system.
This approach may save time, provided that the final solution is indeed accepted. Significant costs and time may be incurred in setting up the equipment and overall system. If this is done with the free assistance of the vendor it may be easy to achieve, but if the project team alone has to install the equipment, systems software and applications software, then significant time, costs and risks may be incurred.

Option: Acquire hardware
In some cases, it may be appropriate to purchase and install the required hardware at this stage so that the tests may be performed on it.
This clearly involves time and expenditure at an early stage, and the hardware configuration may unnecessarily become a constraint upon the selection process as a result. It is unlikely to be a good approach unless there is a very strong reason to select a particular form of hardware (eg a compulsory IT Strategy requirement).
The environment will need to be set up according to the approach determined above. This will involve many of the tasks and considerations that apply to the full implementation of the system. If the vendor is willing to assist free of charge (or at an acceptable price), this can be a significant benefit. For specific guidance, see the appropriate processes in the Delivery segment of the Software Application Implementation Processes. The considerations may include:
- site preparation, eg accommodation, air conditioning, power supplies, communications lines, etc,
- hardware installation, configuration, testing and acceptance,
- systems software installation and testing,
- staffing the new system,
- training for operations staff,
- training for project team staff in the configuration and use of the system,
- basic parameterisation of the package,
- basic testing of application.
The Benchmarking tests will be conducted in accordance with the definitions prepared to meet the overall objectives. Tests should be performed in a controlled manner, following the defined detail of each test and noting any departure from the expected or desired results. Where timing or volume considerations exist, care must be taken to ensure that the tests are fair and realistic. For example, there should not be another major drain on the system’s processing capabilities running at the same time unless this mimics a real-life requirement of the system (eg it might be necessary to demonstrate that a proposed accounting system can run alongside business-critical real-time customer information and sales systems even at month-end processing levels).
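By way of illustration, a timing and volume test of this kind typically simulates a population of concurrent users and records response times and throughput. The sketch below is a minimal example in Python; submit_transaction() is a hypothetical stand-in for one unit of work against the system under test, and the figures are placeholders, not recommended values:

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

USERS = 50          # simulated concurrent user population
DURATION_SECS = 60  # length of the measured run

def submit_transaction():
    # Hypothetical stand-in: perform one business transaction against
    # the system under test (eg an order entry posting or enquiry).
    time.sleep(0.05)

def user_session():
    # One simulated user submitting transactions for the test period,
    # recording the response time of each.
    timings = []
    end = time.monotonic() + DURATION_SECS
    while time.monotonic() < end:
        start = time.perf_counter()
        submit_transaction()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=USERS) as pool:
    sessions = [pool.submit(user_session) for _ in range(USERS)]
    timings = [t for s in sessions for t in s.result()]

print(f"throughput:      {len(timings) / DURATION_SECS:.1f} tx/sec")
print(f"mean response:   {statistics.mean(timings) * 1000:.0f} ms")
print(f"95th percentile: {statistics.quantiles(timings, n=20)[18] * 1000:.0f} ms")

A real run would replace the sleep with the actual transaction, hold everything else on the machine constant, and repeat the run to confirm the figures are stable.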
Tests should be controlled and logged to ensure that they have all been conducted and finalised. The Software Application Implementation Processes Test Control Log may be used.
Any problems encountered should be logged and investigated (where appropriate). The tests may be repeated after remedial action has been taken. The Software Application Implementation Processes Test Incident Control Form and Incident Control Logs may be used. Note that it may not always be appropriate to seek to remedy problems encountered; it may simply be a finding of the Benchmarking that the deficiency exists. Also, some tests may be intended to determine the capability of the system, eg how many transactions can be processed in an hour.
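As an illustration, an incident log of the kind described might be kept as a simple structured file. The sketch below (Python; the field names and file name are assumptions for illustration, not the methodology’s actual form layout) appends one record per problem encountered:

import csv
import os
from datetime import date

INCIDENT_FIELDS = ["incident_id", "test_id", "date_raised",
                   "description", "remedial_action", "status"]

def log_incident(path, incident):
    # Append one incident record, writing the header row only when the
    # log file is new or empty.
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=INCIDENT_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(incident)

log_incident("benchmark_incidents.csv", {
    "incident_id": "INC-003",
    "test_id": "BM-01",
    "date_raised": date.today().isoformat(),
    "description": "Posting run exceeded the 4-hour window (actual 5h 20m)",
    "remedial_action": "Vendor to re-run after index tuning",
    "status": "open",
})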
In some cases, it may be appropriate to attempt to improve upon the initial results achieved. This may be appropriate, for example, where it takes some time to configure the system to achieve optimum efficiency.
Documenting the results
The formal documentation of the results will normally be presented as part of the overall Selection Report (see Process S250). If, for some reason, this is inappropriate, a separate “Benchmark Report” may be issued.
An initial draft of the results may be produced immediately and should be agreed with the parties concerned prior to preparation of the full report. In particular, any problems should be discussed with the vendor to ensure that they are genuine issues and to give the vendor a chance to find a solution.