19.6 Software test and evaluation tasks (software implementation stage)
1. Prepare the software test environment. The software test environment that will be used to support software acceptance testing must be prepared. Special equipment, test applications, and metric data collection and analysis tools may need
to be acquired or developed to provide load, stress, and scalability testing, software product performance benchmarking, and regression testing.
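To make the environment-preparation task concrete, the following minimal sketch (Python; the tool names, load parameters, and thresholds are illustrative assumptions, not values taken from this section) records a candidate acceptance-test environment description and checks that the required tools are actually available before testing begins.

```python
"""Illustrative acceptance-test environment check; tool names and parameters are assumptions."""
import shutil
import sys

# Hypothetical environment description supporting load, stress, scalability,
# benchmarking, and regression testing.
TEST_ENVIRONMENT = {
    "required_tools": ["python3", "curl"],                    # tools the procedures invoke
    "load_profile": {"virtual_users": 200, "ramp_up_s": 60},  # load/stress settings
    "benchmark_baseline_ms": 250,                             # performance benchmark threshold
    "regression_suite": "tests/regression",                   # location of regression tests
}

def verify_environment(env: dict) -> bool:
    """Return True if every required tool is available on the PATH."""
    missing = [tool for tool in env["required_tools"] if shutil.which(tool) is None]
    for tool in missing:
        print(f"MISSING: {tool} is not installed in the test environment")
    return not missing

if __name__ == "__main__":
    sys.exit(0 if verify_environment(TEST_ENVIRONMENT) else 1)
```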
2. Finalize the software acceptance test procedures. Software acceptance test procedures should be finalized during software implementation, in time to support a dry run of the software product acceptance test procedures.
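As a purely illustrative sketch (the procedure identifier, requirement IDs, and steps below are invented for the example), an acceptance test procedure can be captured in a structured form during finalization so that it can later be dry-run and its results recorded mechanically.

```python
"""Illustrative structured form of an acceptance test procedure; all contents are hypothetical."""
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str             # operator or script action to perform
    expected_result: str    # result the procedure says should be observed

@dataclass
class AcceptanceTestProcedure:
    procedure_id: str
    requirement_ids: list    # requirements this procedure verifies
    steps: list = field(default_factory=list)

# Example procedure; identifiers and steps are placeholders, not taken from the text.
ATP_001 = AcceptanceTestProcedure(
    procedure_id="ATP-001",
    requirement_ids=["SRS-042"],
    steps=[
        TestStep("Start the application with the default configuration",
                 "Application reaches the READY state within 30 s"),
        TestStep("Submit a valid login request",
                 "A session token is returned and logged"),
    ],
)
```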
3. Conduct dry-run testing. The software test and evaluation team should execute the acceptance test procedures on the software product executables. The motivations for dry-run testing include:
● To gain experience with conducting the acceptance test procedures.
● To identify and correct any defects in the test procedures.
● To ensure that the computing environment is properly configured to support testing.
● To ensure that the software product will satisfactorily pass acceptance testing.
Members of the SWE-IPT should monitor the execution of each test to ensure that the test procedures are followed and that the results are accurately captured and recorded.
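A dry-run harness along these lines might resemble the following sketch (the step checks and record format are assumptions made for illustration): it executes each procedure step, captures whether the expected result was observed, and writes a record the SWE-IPT can review to confirm the procedure was followed and the results were captured.

```python
"""Illustrative dry-run harness; step checks and record format are assumptions."""
import json
from datetime import datetime, timezone

def run_dry_run(procedure_id: str, steps: list) -> dict:
    """Execute (description, check_fn) steps and return a test record."""
    record = {
        "procedure_id": procedure_id,
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "results": [],
    }
    for description, check in steps:
        try:
            passed = bool(check())
            detail = "as expected" if passed else "deviated from expected result"
        except Exception as exc:        # a crash during a step also counts as a deviation
            passed, detail = False, f"exception: {exc}"
        record["results"].append({"step": description, "passed": passed, "detail": detail})
    return record

if __name__ == "__main__":
    # Placeholder checks standing in for real test-step verifications.
    steps = [
        ("Application reaches the READY state", lambda: True),
        ("Login returns a session token", lambda: False),
    ]
    print(json.dumps(run_dry_run("ATP-001", steps), indent=2))
```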
4. Generate software problem reports. When a software test procedure does not generate the expected results, a software problem report should be generated to identify the problem and explain how the observed behavior deviated from the expected results. Software problem reports that require a modification to the software architecture or TDP should be identified as ECRs. ECRs must be resolved by the SWE-IPT to identify the proper change to be implemented to resolve the architectural design problem.
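A software problem report can be held in a simple structured record, as sketched below; the field names and the ECR flag are illustrative assumptions rather than a prescribed format.

```python
"""Illustrative software problem report record; field names are assumptions."""
from dataclasses import dataclass

@dataclass
class SoftwareProblemReport:
    report_id: str
    procedure_id: str           # acceptance test procedure that exposed the problem
    expected_result: str
    actual_result: str
    requires_ecr: bool = False  # True when resolution needs an architecture/TDP change

    def summary(self) -> str:
        kind = "ECR (architecture/TDP change required)" if self.requires_ecr else "SPR"
        return (f"{self.report_id} [{kind}] from {self.procedure_id}: "
                f"expected '{self.expected_result}', observed '{self.actual_result}'")

# Example report; contents are placeholders.
report = SoftwareProblemReport("SPR-017", "ATP-001",
                               "A session token is returned", "HTTP 500 returned")
print(report.summary())
```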
5. Revise the software test procedures. The software test procedures should be revised to correct any errors identified during the dry-run activity. It is possible for a test failure to occur because the software test procedure was improperly defined or its expected results were incorrectly postulated.
6. Support ECR evaluations. Test and evaluation representatives to the SWE-IPT should participate in evaluating ECRs and determining the appropriate architectural resolution.
7. Conduct software quality assurance inspections and audits. Software quality inspections should be conducted periodically during the software implementation phase to assess compliance with approved policies and procedures. The following inspections should be conducted (a minimal evidence-check sketch follows the list):
● Inspection of the assimilation of change request and proposal resolutions.
● Inspection of software problem report resolutions.
● Inspection of the software development folders for software units and components.
● Inspection of software integration and test records.
● Inspection of dry-run test records.
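Parts of these inspections can be supported by simple automated evidence checks, as in the sketch below; the software development folder layout, required artifact names, and problem-report status values are assumptions chosen for illustration.

```python
"""Illustrative QA inspection evidence check; folder layout and statuses are assumptions."""
from pathlib import Path

# Hypothetical artifacts expected in every software development folder (SDF).
REQUIRED_SDF_ARTIFACTS = ["design.md", "unit_test_results.txt", "review_record.txt"]

def inspect_sdf(sdf_dir: Path) -> list:
    """Return the required artifacts missing from a software development folder."""
    return [name for name in REQUIRED_SDF_ARTIFACTS if not (sdf_dir / name).exists()]

def inspect_spr_resolutions(reports: list) -> list:
    """Return IDs of software problem reports that are not yet closed."""
    return [r["id"] for r in reports if r.get("status") != "closed"]

if __name__ == "__main__":
    missing = inspect_sdf(Path("sdf/unit_alpha"))    # placeholder SDF path
    open_sprs = inspect_spr_resolutions(
        [{"id": "SPR-017", "status": "closed"}, {"id": "SPR-021", "status": "open"}]
    )
    print("Missing SDF artifacts:", missing or "none")
    print("Unresolved SPRs:", open_sprs or "none")
```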
Software audits should be conducted prior to TRR to ensure the software documentation provides a consistent, traceable framework for post-development sustainment or incremental/evolutionary development. The software documentation and artifacts must be audited to ensure that they reflect the “as-tested” software product configuration. The following audits should be conducted: