Tuesday, July 5, 2016

Project Delivery Process D810

D810 - Technical Testing and Tuning


DEFINITION

Plan, prepare and conduct technical testing, tuning and volume testing.

SUMMARY

Many forms of technical testing, tuning, setting up and integration will be necessary.  Very often this work is “sub-contracted” to the organisation’s computer operations department.  In some cases it may also be appropriate to involve the hardware and/or package vendors in this work.
Technical aspects of the system must also be controlled and tested.  It is more common to suffer a major live problem due to an error in the JCL or technical operations (eg backup/restore) than due to an error in the functional parameters.  Technical testing may cover:
  • systems programming, Job Control Language (JCL), design of processing suites etc
  • transfer of data between systems - controls, validations, timing, locks etc
  • database setup, file creation, placement, naming etc
  • operational procedures, run controls, output handling, job scheduling etc
  • backup and recovery procedures
  • disaster recovery procedures
  • network configuration, access paths, remote printer set up, etc
  • capacity to handle transaction volumes
  • capacity to handle data volumes.
Note - with Brief Delivery approaches, these tasks might be performed informally at the request of the client organisation - ie, the system might be tuned and checked for its ability to run operational volumes, but without detailed planning and controlled testing. As such work is not reviewed or signed-off, it should be made clear that adequacy of performance cannot be considered a responsibility of the project team.
The approach should be defined and agreed.  It will normally be documented in an Implementation Paper (or Brief Implementation Paper) - Technical Tuning and Testing.  The agreed testing will subsequently be prepared, conducted, reviewed and agreed as appropriate.
Each type of formal testing should be prepared in advance.  The project team should normally work with the user manager primarily responsible for the given area to ensure that the tests are acceptable.  Preparation would normally comprise:
  • definition and agreement of the objectives for the phase of testing
  • definition and agreement of the objectives for each test
  • detailed definition of each test,
  • expected results for each test as appropriate.
Tests will be performed in a controlled manner.  All incidents should be reported, logged and investigated.  If corrections are applied, the test must be repeated along with any other test that could have been affected by the changes applied.  Results will be reviewed by the responsible user manager and signed off.
Note that this process addresses technical testing - functional testing and user acceptance is covered in Process D800.  Several of the general ideas described in Process D800 will apply equally to this process.

PATH PLANNING GUIDANCE

Normal Practice

DEPENDENCIES

Prerequisites (Finish-Start):
  • installation of components
Prerequisites (Finish-Finish):
  • operational / technical design tasks
Dependent procedures (Finish-Start):
  • data load and live running

RECEIVABLES

  • Technical Plan IP
  • operational procedures (see Process D720)
  • other relevant Implementation Papers defining the system’s design

DELIVERABLES

  • Implementation Paper - Technical tuning and testing
  • Test Objectives
  • Test Definitions
  • Test Control Log
  • Test Incident Reports
  • Test Incident Control Log
  • Test Sign offs

TOOLS

  • Guidelines: Testing Standards and Procedures
  • Skeleton Deliverable: Test Objectives
  • Skeleton Deliverable: Test Definitions
  • Skeleton Deliverable: Test Control Log
  • Skeleton Deliverable: Test Incident Report
  • Skeleton Deliverable: Test Incident Control Log
  • Skeleton Deliverable: Test Sign offs
  • System Test Signoff Letter
  • Test Conditions Worksheet

DETAILED DESCRIPTION OF TASKS

The Technical Tuning and Testing Implementation Paper

The overall approach to technical testing is considered in an environmental implementation paper - technical tuning and testing.  In a similar fashion to other implementation papers, it will review the requirements and options relating to technical testing, then state and justify a recommended approach.

Requirements for testing

The overall objective of all forms of testing is to prove that the system is suitable for live usage.  This would normally involve testing all reasonable aspects of expected usage of the system, including anticipated abnormal events such as user and data errors.  In terms of the effort involved, there is usually far more work required to cover the abnormal situations than the routine processing.
It is probably not possible, and certainly not reasonable, to test every single set of circumstances that can arise.  The testing needs to strike a reasonable balance between comprehensive coverage and risk.  The extent of coverage of the tests should be balanced against the risks involved - clearly the navigation systems on the space shuttle deserve more attention than a typing tutor program.  The magnitude and likelihood of the failure should be balanced against the costs and time required to perform comprehensive testing.
The following list gives examples of some of the test requirements that may be appropriate:
  • valid processing
  • recovery from failure of each program or module
  • empty files - ie no data passing through in a given run
  • empty reports
  • physical failure or corruption of files and databases
  • accesses to files, databases, programs and other resources address the correct versions (eg live system accidentally still addresses parameter database in the test environment)
  • running out of physical space for each file or database
  • recovering/reversing the system to a given backup
  • invalid data in files / invalid control totals
  • wrong version of files (eg day before yesterday’s carried forward file instead of yesterday’s)
  • reversing out interfaces
  • multiple runs of one system for a single run of an interfaced system
  • physical security / logical security / access security
  • protection from double update of data by two users simultaneously
  • acceptable response times per type of transaction
  • acceptable run times of batch processes / turnaround time for reports

Requirements for tuning the system

Before tuning, packages often perform worse than an organisation’s worst expectations.  The system will need technical tuning to ensure acceptable performance.  Volume testing validates that full volumes of data and transactions can be accommodated.  It also provides timing and workload statistics which can be used in the scheduling of work on the computer.
Testing should prove that normal loads can be sustained and peak loads can be accommodated.  Successive tuning and volume testing runs will probably be required to achieve satisfactory results.  These should be planned and allowed for in the overall scheduling.
The main aspects that need to be tuned and tested are:
  • physical size of files and databases,
  • real-time performance - average transaction times per type of transaction,
  • length of batch runs,
  • percentage of processor and other resources utilised - ie how much other work can be done while the system is running,
  • capability of the communications network to handle the loadings and peak concurrency,
  • efficiency of processes - achieving performance requirements without wasting resources.
It is often good practice to agree a “Service Level Agreement” with the MIS department.  This would “guarantee” service levels which end users can expect.  The tuning and volume testing will be important in the definition of service levels which are acceptable to both the users and the MIS operations department.

Options

Testing will normally be conducted in several phases comprising different types of testing.  Types of technical testing may include:
  • Informal tests or prototyping
    Definition:  Testing elements of the technical set up as it is developed to check that it will work adequately.
    Comments:  Such tests are valuable but do not have formal testing controls applied to them - and are not therefore defined in this process.
  • Volume testing
    Definition:  Creating sufficient transactions and file sizes to simulate normal and peak work loads, thus verifying that response times and processing times will be satisfactory, that file sizes are sufficiently large, and that the communications network can handle the loads.  This also gives firm indications of likely timings, thus allowing effective run scheduling.
    Comments:  This testing is usually combined with the tuning of the system.  Thought should be given to how the system can be subjected to loads without unreasonable demands on the end users.  Mass generation of transactions and dummy data may be appropriate (the data does not need to be realistic - it just needs to simulate the predicted load on the system).
  • Operations testing
    Definition:  Testing of batch environment routines, eg Job Control Language (JCL), job processing, system backup and recovery procedures etc.
    Comments:  Must be performed.  Formal definition and control is recommended.  Should include failures of procedures, empty files, controls, interfaces with other systems etc.
  • Special stationery handling
    Definition:  Test operational use of special stationery, eg line up, line up routines, controls, recording of printed serial numbers (eg cheque numbers on cheque stationery).
    Comments:  Good practice - allow time to get the stationery layout correct.
  • Output handling and distribution
    Definition:  Correct handling of output distribution - routing, duplication, multi-part requirements, controls, physical handling etc.
    Comments:  Good practice where appropriate.
  • Multiple access locks and duplicate updates
    Definition:  Check facilities for preventing (or dealing with) simultaneous access or update by more than one user to the same data items or resources.
    Comments:  Good practice - often the system works perfectly until it becomes heavily used.  Check for “deadly embrace” handling - ie two processes both waiting for the other to release a resource it is holding.
  • Log files, checkpointing, duplicates and recovery
    Definition:  Check routines for taking checkpoint dumps of the system such that it can be restarted with a minimal loss of data.
    Comments:  Good practice where appropriate - may take time to test all the system’s inherent data recovery facilities.
  • Security testing
    Definition:  Test that the system security and database security for the overall system and for each specific user is appropriate.
    Comments:  Good practice where appropriate.
  • Regression testing of bug fixes and upgrades
    Definition:  Testing that the system has not been affected in an unexpected way by any upgrades or bug fixes applied.
    Comments:  Any program changes must be tested out.  Try to keep the volume under control.  Beware the vendor who offers to fix a problem by putting in the next upgrade to the base software - this can mean restarting everything.
  • Fallback testing
    Definition:  Tests the contingency plan for reverting to the old system in the event of a failure of the new one.
    Comments:  May be a wise precaution if the cutover plan allows a fallback contingency.
  • Disaster testing
    Definition:  Alternate processing in case of a system failure.
    Comments:  Ideally, a full dry run should be performed to make sure the procedures really do work.
  • Operational Acceptance Testing
    Definition:  Formal tests to satisfy the MIS operations department that the package-based system is of adequate quality to go into live production.
    Comments:  These tests are of great value if conducted sympathetically - ie with a view to implementing an adequate operational solution.  Beware, however, unreasonable demands from the MIS department.  Packages rarely meet all the internal standards imposed by the client organisation’s MIS department.

Options for tuning

The options for tuning a system will vary considerably according to the technical architecture.  Very often, specialised tools will be available for diagnosing the bottlenecks in the system and identifying how improvements could be made.  It is common to use specialist staff for this work.
Typical areas for optimisation include:
  • Buffer sizes in programs - ie how much spare memory is reserved to reduce the need to make physical accesses to magnetic media, or to smooth the waiting time for transfers to take place
  • Number and type of queues or “threads” - number of transactions that the computer will handle concurrently.  It may also be possible to tune which types of queue handle which types of transaction and how much resource (eg priority / buffer space) they get.
  • Priorities - ie the relative amount of computer time that is available to different processes running concurrently, for example the package system may run slowly even though it has been given high priority if this reduces the power available to the database handling process which, therefore, becomes a bottleneck.
  • Job mix - the way in which different applications are run concurrently on the same computer.  There may be some combinations of job types which do not work well together, for example where they both make heavy use of the same file.
  • File placement - it may be possible to improve the speed of data transfers by placing key files on physically separate disks and controllers.  This tends to reduce head movement and allows both disks to be doing useful work simultaneously.
  • Block sizes - the size of data transferred at any one time - typically very large block sizes are good for serial transfers whereas small ones are better for random access.
  • Indexing methods and index buffers - it may be possible to tune the way in which randomly accessed data is indexed and how those indexes are held.  The best approach will often depend upon the equipment available.
  • Memory disks - data which is accessed constantly may be moved to virtual disks held on an electronic device such as a “RAM disk”.  These are very much faster (but more expensive) than ordinary rotating disks.  It may be necessary for the data to be backed up into a more permanent media in case of a system failure.
  • Buy more hardware - some improvements may require more hardware or better hardware, for example, more “RAM” main memory, faster disks, faster network links.
  • Process redesign - in some cases where a process has been defined in an inefficient manner, it may be appropriate to redesign the way the package is being used rather than to seek a technical solution.

Recommended approach

There is no advantage in repeating tests which have already been satisfactorily performed.  Accordingly, it is good practice to minimise the testing to as few tests and as few cases as possible, provided the defined requirements are met.
Many organisations would normally undertake the technical testing of the system without formal definition or control.  This is not to be recommended.  If, however, the client organisation insists on “doing things its own way” then it must be clear that the project team are not taking responsibility for any technical problems.

Detail of approach - Test Plan

The detail of the approach may be laid out as a test plan showing the main phases of testing and their timing.  Tests will then be specified in detail and agreed with appropriate staff within the client organisation.  The conduct and control of the tests should follow agreed standards and procedures.  The results should be accepted and signed off by a responsible member of the client organisation.  Where the technical tests are not performed by the project team itself, the project team should also review and approve the results to ensure the technical set up is adequate from the project’s point of view.

The general approach to defining, controlling, reviewing and signing off tests is described in detail in Process D800 and in the Guidelines document Testing Standards and Procedures.  These principles may also be applied to any formal aspect of technical testing.
