Friday, June 17, 2016

Project Delivery Process D606

D606 - Finalise Conversion Design


DEFINITION

Finalise the design of the extraction, validation, control and input systems for data conversion.

SUMMARY

The purpose of this task is to finalise design and development of automated procedures to manage the conversion of data from the form in which it currently exists to the form required by the new application.

PATH PLANNING GUIDANCE

This process is optional and has been written for Application Software Implementations where existing data will be converted by automated methods.

DEPENDENCIES

Prerequisites (Finish-Finish):
•        Data Conversion Strategy  (D180)
Dependent procedures (Finish-Finish):
•        Instigate non-Application Software parallel tasks (D650) - where programming work will be outside the scope of the project team
•        Modify existing non-Application Software application software (D656) - for components of the work not using Application Software facilities
•        Modify package software (D658) - for components of the work using Application Software facilities
Dependent procedures (Finish-Start):
•        Build of corresponding programs / modules etc

RECEIVABLES

•        Definition of Requirements (DoR)
•        Delivery Approach Definition (DAD) or similar definition of conceptual design
•        Technical Plan IP - see process D110
•        Data Model - see process D170
•        Data Conversion Strategy IP - see process D180.
•        Documentation for the other application(s) involved
•        Site standards and procedures
•        Business enterprise model
•        Business process specifications

DELIVERABLES

•        Conversion design - Technical Implementation Paper per conversion

TOOLS

•        Application Development Standards - Coding Standards
•        Application Development Standards - Naming Standards
•        Application Development Standards - File Definition Standards
•        Skeleton Implementation Paper
•        Guidelines: “Modelling Techniques”
•        Guidelines: “Custom Development”
•        IP Guidelines: “IP-Data Conversion”
•        IP Guidelines: “IP-Database”
•        IP Guidelines: “IP-Languages”
•        IP Guidelines: “IP-Handover”
•        Examples: “Interfaces and output files worksheet”
•        Examples: “Conversion Worksheet”
•        Guidelines: “Application Software development tools”

DETAILED DESCRIPTION OF TASKS

Requirements

The requirements for data conversion may be found in the earlier requirements documents, conceptual design (Delivery Approach Definition), the Data Conversion Strategy IP (see Process D180), or areas of specific functional design during the design/prototyping processes (see Process D400).  These needs should be confirmed and reviewed as appropriate to ensure that the final details are frozen for the build processes.  Note that prototyping work is often continuing at this time, so it can be important to check that the conversion requirements are stable before finalising the design.
Remember that old data is almost NEVER:
•        complete
•        of good consistent quality
•        in the right format
•        in the right sequence
•        using the right (new) codes
•        one old record for one new record
•        without duplicates
•        readable directly without a new extraction program.
A large part of the requirement may be to extract, manipulate, validate, correct, sort and control the data.
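A typical pre-processing pass over legacy data can be sketched as below. This is an illustrative sketch only: the field names, the code-translation table and the DDMMYY date format are assumptions for the example, not part of the D606 standard.

```python
# Hypothetical conversion pre-processing pass: deduplicate, translate old
# codes to new ones, reformat dates, and collect rejects for manual review.
from datetime import datetime

CODE_MAP = {"A1": "ACTIVE", "X9": "CLOSED"}  # old code -> new code (assumed)

def clean_records(legacy_rows):
    """Return (converted, rejects) from a list of legacy record dicts."""
    seen, converted, rejects = set(), [], []
    for row in legacy_rows:
        key = row["account_no"]
        if key in seen:                       # old data often has duplicates
            rejects.append((row, "duplicate"))
            continue
        seen.add(key)
        new_code = CODE_MAP.get(row["status"])
        if new_code is None:                  # code missing from translation table
            rejects.append((row, "unknown status code"))
            continue
        try:                                  # DDMMYY -> ISO date
            opened = datetime.strptime(row["opened"], "%d%m%y").date().isoformat()
        except ValueError:
            rejects.append((row, "bad date"))
            continue
        converted.append({"account_no": key, "status": new_code, "opened": opened})
    return converted, rejects
```

Keeping rejects alongside a reason string supports the manual-correction cycle referred to later in this process.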

Options

Full options should be considered, for example:
•        custom built programs
•        use of package facilities
•        use of interfacing tools
•        use of manual re-entry.
Consider and report on the relative merits of these approaches, particularly in terms of costs, benefits, lead times, resource requirements, quality and risk.  Remember that the conversion routines may only be run once, so it may be legitimate to accept lower standards, lower quality and higher risk, provided the resulting data load is reviewed for accuracy and manual corrections are permitted.

Recommendation

Agree the preferred course of action for each data conversion with the responsible managers in the client organisation (and/or external bodies where appropriate).

Detail of conversion design (where automated techniques are to be used)

Initial data conversions are one-time loads that populate Application Software data from legacy systems.  Techniques may be identical to those used in permanent interfaces - see Process D602 for guidance on interface design options.
Data extraction may often be performed using features of the old system, or written in the same language as the old system, since it will be necessary to access the existing data, preferably via standard routines.  Validation, correction, sorting, reformatting, translation and control functions may be performed using whichever tools and techniques are most appropriate.
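One of the simplest and most valuable control functions is a control total reconciling what was extracted against what was loaded. A minimal sketch, assuming a numeric "amount" field purely for illustration:

```python
# Illustrative control-total check: a record count plus a hash total over a
# numeric field. The "amount" field name is an assumption for the example.
def control_totals(rows, amount_field="amount"):
    """Return (record count, hash total) for a list of record dicts."""
    count = len(rows)
    hash_total = round(sum(float(r[amount_field]) for r in rows), 2)
    return count, hash_total

extracted = [{"amount": "10.50"}, {"amount": "4.25"}, {"amount": "7.00"}]
loaded    = [{"amount": "10.50"}, {"amount": "4.25"}]

# A mismatch signals records lost or altered between extract and load:
assert control_totals(extracted) == (3, 21.75)
assert control_totals(extracted) != control_totals(loaded)
```

The same pattern extends to per-file and per-batch totals so that a failed reconciliation can be localised quickly.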

Loading the converted data

Batch Input Program Processing Options:

Run Mode             | Processing Dispatched To | Runtime Limit                        | Speed        | Use
---------------------|--------------------------|--------------------------------------|--------------|------------------------------------
Process / foreground | Dialog Process           | approx 5 minutes (clock time)        | Slowest      | Use during development & debugging
Display errors only  | Dialog Process           | approx 5 minutes (clock time)        | Next fastest | Use during development
Background           | Background Process       | No limit; jobs run until they finish | Fastest      | Use during development & unit test
Background Job       | Background Process       | No limit; jobs run until they finish | Fastest      | Use in system test & production
The following performance improvement activities should be initiated:
•        Identify and verify Application Software configuration modifications to optimise performance of initial data conversions.
•        Verify server configuration modifications to optimise performance of initial data conversions. Note that modifications may be possible on a one-off basis, just while the conversion is being run.  Associated pros and cons should be considered before deciding to implement:
- Increasing memory cache
- Turning off logging during loads
- Pre-sorting data loads
- Dropping indexes before loads
- Enlarging the redo log buffer and redo log files
- Physically locating tables with many updates or inserts on separate disks
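The "dropping indexes before loads" tactic above can be sketched as follows. SQLite is used here purely to illustrate the pattern with a self-contained example; a real conversion would apply the same idea (drop, bulk load in one transaction, rebuild once) using the target DBMS's own facilities, and the relaxed-durability PRAGMA stands in for the one-off "turn off logging" option.

```python
# Sketch: drop indexes before a bulk load, load in a single transaction,
# then rebuild the index once. Table and column names are illustrative.
import sqlite3

def bulk_load(rows):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE ledger (acct TEXT, amount REAL)")
    con.execute("CREATE INDEX ix_acct ON ledger (acct)")

    con.execute("DROP INDEX ix_acct")        # avoid per-row index maintenance
    con.execute("PRAGMA synchronous = OFF")  # one-off: relax durability for the load
    with con:                                # single transaction for the whole load
        con.executemany("INSERT INTO ledger VALUES (?, ?)", rows)
    con.execute("CREATE INDEX ix_acct ON ledger (acct)")  # rebuild once, sorted
    return con

con = bulk_load([("A", 1.0), ("B", 2.0)])
```

As the text notes, such changes should be made on a one-off basis, only while the conversion runs, and reverted afterwards.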

Batch Input Sessions

Application Software Batch Input automatically does the necessary validation and updates of data, but is slow.  To improve performance, especially on large loads, consider breaking large data feeds into parallel Batch Input Sessions to be run on different application servers if possible in order to increase throughput.  
The best performance will be achieved by running only one Batch Input Session on each application server.  Smaller performance gains can be achieved by running multiple Batch Input Sessions on each application server; in this scenario the number of update processes must equal or exceed the number of Batch Input Sessions running on that server.
Running Batch Input Sessions in parallel is not applicable when the data feed being processed contains fields for which Application Software automatically creates serial values.  In this case the data feed carries no data for the field in question; because the serial numeric value is assigned in the order records are posted, the feed must be processed in a single session.
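The splitting of one large feed into parallel sessions can be sketched as below. The `submit_session` function is a hypothetical stand-in for whatever mechanism actually submits and runs a Batch Input Session; only the chunking and fan-out pattern is the point.

```python
# Sketch: split one data feed into N chunks and run them concurrently,
# one worker per would-be Batch Input Session.
from concurrent.futures import ThreadPoolExecutor

def split_feed(rows, n_sessions):
    """Round-robin the feed into n roughly equal chunks."""
    return [rows[i::n_sessions] for i in range(n_sessions)]

def submit_session(chunk):
    # Placeholder for the real session-submission call; here it just
    # reports how many records this session would post.
    return len(chunk)

def run_parallel(rows, n_sessions=4):
    chunks = split_feed(rows, n_sessions)
    with ThreadPoolExecutor(max_workers=n_sessions) as pool:
        return sum(pool.map(submit_session, chunks))
```

Per the caveat above, this approach only applies when record order does not matter - feeds relying on automatically assigned serial values must stay in a single session.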
