Software Testing Terms

Manual Testing Terminology Part 1

Acceptance criteria:
The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.

Acceptance testing:
Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.

Accessibility testing:
Testing to determine the ease with which users with disabilities can use a component or system.

Accuracy:
The capability of the software product to provide the right or agreed results or effects with the needed degree of precision.

Accuracy testing:
The process of testing to determine the accuracy of a software product.
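As a hypothetical illustration (the function name and the tolerance value are assumptions for the example, not part of the glossary), an accuracy check typically compares an actual result to the agreed result within the needed degree of precision rather than demanding exact equality:

```python
import math

# Hypothetical accuracy check: the agreed result and the needed
# precision (absolute tolerance) would come from the specification.
def is_accurate(actual: float, agreed: float, tolerance: float = 1e-9) -> bool:
    """Return True if the actual result matches the agreed result
    within the needed degree of precision."""
    return math.isclose(actual, agreed, abs_tol=tolerance)

result = 0.1 + 0.2                # floating-point arithmetic adds a tiny error
print(is_accurate(result, 0.3))   # True: within the agreed precision
print(result == 0.3)              # False: exact comparison is too strict
```

This is why accuracy testing is usually phrased in terms of an agreed precision: a strict equality check would fail results that are correct for all practical purposes.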

Acting (IDEAL):
The phase within the IDEAL model where the improvements are developed, put into practice, and deployed across the organization. The acting phase consists of the activities: create solution, pilot/test solution, refine solution, and implement solution.

Actual result:
The behavior produced/observed when a component or system is tested.
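To illustrate (a sketch; the component under test here is invented for the example), a test verdict comes from comparing the actual result observed during execution against the expected result derived from the specification:

```python
def add(a: int, b: int) -> int:
    # Hypothetical component under test.
    return a + b

expected_result = 5           # derived in advance from the specification
actual_result = add(2, 3)     # behavior observed when the component is tested

# The test passes when the actual result matches the expected result.
verdict = "pass" if actual_result == expected_result else "fail"
print(verdict)  # pass
```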

Ad hoc testing:
Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results, and arbitrariness guides the test execution activity.
Adaptability:
The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126]

Agile manifesto:
A statement on the values that underpin agile software development. The values are:

–    individuals and interactions over processes and tools
–    working software over comprehensive documentation
–    customer collaboration over contract negotiation
–    responding to change over following a plan.

Agile software development:
A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.

Agile testing:
Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

Alpha testing:
Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Analyzability:
The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified.

Anomaly:
Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.

Assessment report:
A document summarizing the assessment results, e.g. conclusions, recommendations and findings. See also process assessment.

Assessor:
A person who conducts an assessment; any member of an assessment team.

Attack:
Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur. See also negative testing.

Attractiveness:
The capability of the software product to be attractive to the user.

Audit:
An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:

(1)    the form or content of the products to be produced
(2)    the process by which the products shall be produced
(3)    how compliance to standards or guidelines shall be measured. [IEEE 1028]

Audit trail:
A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out.
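A minimal sketch of the idea (the names and the in-memory store are illustrative assumptions, not from any standard): each processing step records its output alongside the input that produced it, so that, starting from an output, the original input can be traced back:

```python
# Illustrative audit trail: map each process output back to its input.
audit_trail: dict[str, tuple[str, str]] = {}

def process(record_id: str, data: str) -> str:
    """Hypothetical processing step that logs input -> output."""
    output = data.upper()                      # the "process" itself
    audit_trail[output] = (record_id, data)    # remember where it came from
    return output

result = process("rec-42", "hello")

# Taking the process output as a starting point, trace back to the input.
origin = audit_trail[result]
print(origin)  # ('rec-42', 'hello')
```

In practice the trail would live in persistent, tamper-evident storage (e.g. an append-only log) rather than an in-memory dictionary.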

Automated testware:
Testware used in automated testing, such as tool scripts.

Availability:
The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
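For example (a back-of-the-envelope sketch with invented figures, not a formula from IEEE 610), availability as a percentage can be computed from the time the system was operational out of the total time it was required:

```python
def availability_percent(uptime_hours: float, downtime_hours: float) -> float:
    """Availability = operational time / total required time, as a percentage."""
    total = uptime_hours + downtime_hours
    return 100.0 * uptime_hours / total

# Hypothetical month: 719 hours operational, 1 hour down, of 720 required.
print(round(availability_percent(719, 1), 2))  # 99.86
```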

 
