QA Testing Terms

QA Testing Terms - P
pair programming: A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.

pair testing: Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

pairwise testing: A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing.
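
As an illustration, here is a minimal greedy sketch in Python that builds a test set covering every pair of parameter values. The parameter names and values are invented for the example, and real pairwise tools (e.g. PICT) use more refined algorithms:

    from itertools import combinations, product

    def pairwise_tests(parameters):
        # Every pair of parameters, and every value combination
        # each pair must be covered with.
        names = list(parameters)
        uncovered = set()
        for a, b in combinations(names, 2):
            for va, vb in product(parameters[a], parameters[b]):
                uncovered.add(((a, va), (b, vb)))
        tests = []
        for candidate in product(*parameters.values()):
            test = dict(zip(names, candidate))
            covered = {p for p in uncovered
                       if all(test[k] == v for k, v in p)}
            if covered:              # keep only tests that add coverage
                tests.append(test)
                uncovered -= covered
            if not uncovered:
                break
        return tests

    params = {"browser": ["Firefox", "Chrome"],
              "os": ["Linux", "Windows", "macOS"],
              "locale": ["en", "de"]}
    for t in pairwise_tests(params):
        print(t)

The resulting suite is smaller than the full cross product while still exercising every pair of values.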

Pareto analysis: A statistical technique in decision making that is used for selection of a limited number of factors that produce significant overall effect. In terms of quality improvement, a large majority of problems (80%) are produced by a few key causes (20%).
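
A quick sketch of the idea, with made-up defect counts per suspected cause:

    defects_by_cause = {"parser": 120, "ui": 45, "auth": 30,
                        "report": 15, "install": 8, "docs": 2}
    total = sum(defects_by_cause.values())
    cumulative = 0
    for cause, count in sorted(defects_by_cause.items(),
                               key=lambda kv: kv[1], reverse=True):
        cumulative += count
        print(f"{cause:8} {count:4} {cumulative / total:6.1%}")
        if cumulative / total >= 0.80:
            break  # the few causes above account for ~80% of defects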

partition testing: See equivalence partitioning.

pass: A test is deemed to pass if its actual result matches its expected result.
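
In its simplest form, the pass rule is a direct comparison. A toy sketch, with a hypothetical unit under test:

    def add(a, b):        # hypothetical unit under test
        return a + b

    def verdict(actual, expected):
        return "PASS" if actual == expected else "FAIL"

    print(verdict(add(2, 2), 4))  # PASS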

pass/fail criteria: Decision rules used to determine whether a test item (function) or feature has passed or failed a test.

path: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.


path coverage: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
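
The metric itself is a simple ratio. A tiny sketch with assumed counts:

    # Assumed numbers: a component with two independent if-statements
    # has 2 * 2 = 4 paths; suppose the suite exercises 3 of them.
    total_paths = 4
    exercised_paths = 3
    print(f"path coverage: {exercised_paths / total_paths:.0%}")  # 75%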


path sensitizing: Choosing a set of input values to force the execution of a given path.
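
A small sketch: to force one specific path through this invented function, the chosen inputs must satisfy every branch condition on that path:

    def discount(age, member):
        rate = 50 if age >= 65 else 100   # percent of full price
        if member:
            rate -= 10
        return rate

    # Sensitizing the path "senior branch taken, member branch taken"
    # means choosing inputs with age >= 65 and member=True:
    assert discount(70, True) == 40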

path testing: A white box test design technique in which test cases are designed to execute paths.


peer review: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

performance: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.

performance indicator: A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development.

performance profiling: Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload. See also load profile, operational profile.
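
As an illustration, a user profile can be expressed as operation weights taken from an assumed operational profile, then sampled to drive the load. The operations and weights below are invented:

    import random

    profile = {"search": 0.6, "view_item": 0.3, "checkout": 0.1}

    random.seed(1)  # reproducible workload
    workload = random.choices(list(profile), weights=profile.values(), k=10)
    print(workload)  # mostly searches, matching the assumed usage profile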

performance testing: The process of testing to determine the performance of a software product. See also efficiency testing.

performance testing tool: A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs, and graphs of load against response times.
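
A minimal sketch of both facilities, with a placeholder transaction standing in for a real request:

    import time
    import statistics
    from concurrent.futures import ThreadPoolExecutor

    def transaction():
        time.sleep(0.01)   # placeholder: pretend the system takes ~10 ms

    def timed_transaction(_):
        start = time.perf_counter()
        transaction()
        return time.perf_counter() - start

    # Load generation: 20 simulated users issuing 200 transactions.
    with ThreadPoolExecutor(max_workers=20) as pool:
        log = list(pool.map(timed_transaction, range(200)))

    # Reporting from the test log.
    print(f"mean {statistics.mean(log) * 1000:.1f} ms, "
          f"max {max(log) * 1000:.1f} ms")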

phase test plan: A test plan that typically addresses one test phase. See also test plan.

pointer: A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed.
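
The glossary's own example, sketched with Python object references playing the role of addresses:

    class EmployeeRecord:
        def __init__(self, name, next_record=None):
            self.name = name
            self.next = next_record   # "pointer" to the next record

    last = EmployeeRecord("Chen")
    first = EmployeeRecord("Avery", next_record=last)

    record = first
    while record is not None:         # follow the pointers in order
        print(record.name)
        record = record.next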

portability: The ease with which the software product can be transferred from one hardware or software environment to another.

portability testing: The process of testing to determine the portability of a software product.

postcondition: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.


post-execution comparison: Comparison of actual and expected results, performed after the software has finished running.

post-project meeting: See retrospective meeting.

precondition: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.
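
A small sketch showing both this entry and the postcondition entry above, using a hypothetical account API:

    class Account:
        def __init__(self, balance):
            self.balance = balance
        def withdraw(self, amount):
            self.balance -= amount

    def test_withdraw():
        account = Account(balance=100)
        assert account.balance >= 50   # precondition: sufficient funds
        account.withdraw(50)
        assert account.balance == 50   # postcondition: expected balance

    test_withdraw()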

predicted outcome: See expected result.

pretest: See intake test.

priority: The level of (business) importance assigned to an item, e.g. defect.

probe effect: The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.
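
A sketch of the effect: timing the same work with and without a per-call probe. The exact overhead will vary by machine:

    import time

    def work():
        return sum(i * i for i in range(10_000))

    def run(n, instrumented=False):
        start = time.perf_counter()
        for _ in range(n):
            if instrumented:
                t0 = time.perf_counter()   # the probe itself costs time
                work()
                _elapsed = time.perf_counter() - t0
            else:
                work()
        return time.perf_counter() - start

    print(f"plain:        {run(500):.3f} s")
    print(f"instrumented: {run(500, instrumented=True):.3f} s")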

problem: See defect.

problem management: See defect management.

problem report: See defect report.

procedure testing: Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users’ business procedures or operational procedures.

process: A set of interrelated activities that transform inputs into outputs.


process assessment: A disciplined evaluation of an organization’s software processes against a reference model. [after ISO 15504]

process cycle test: A black box test design technique in which test cases are designed to execute business procedures and processes.


process improvement: A program of activities designed to improve the performance and maturity of the organization’s processes, and the result of such a program.

process model: A framework wherein processes of the same nature are classified into an overall model, e.g. a test improvement model.

product-based quality: A view of quality, wherein quality is based on a well-defined set of quality attributes. These attributes must be measured in an objective and quantitative way. Differences in the quality of products of the same type can be traced back to the way the specific quality attributes have been implemented.

product risk: A risk directly related to the test object. See also risk.

production acceptance testing: See operational acceptance testing.

program instrumenter: See instrumenter.

program testing: See component testing.

project: A unique set of coordinated and controlled activities, with start and finish dates, undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.

project retrospective: A structured way to capture lessons learned and to create specific action plans for improving on the next project or next project phase.


project risk: A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk.

project test plan: See master test plan.

pseudo-random: A series which appears to be random but is in fact generated according to some prearranged sequence.
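
A sketch using a linear congruential generator (with the classic "minimal standard" constants): the output looks random, but the seed fully determines the series, which is what makes pseudo-random test data reproducible:

    def lcg(seed, a=16807, m=2**31 - 1):
        x = seed
        while True:
            x = (a * x) % m
            yield x / m    # scale into [0, 1)

    gen = lcg(seed=42)
    print([round(next(gen), 4) for _ in range(5)])
    # Re-seeding with 42 reproduces exactly the same "random" series.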
