Software Testing Terminology - E
Efficiency testing: The process of testing to determine the efficiency of a software product.
EFQM (European Foundation for Quality Management) excellence model: A non-prescriptive framework for an organisation’s quality management system, defined and owned by the European Foundation for Quality Management, based on five ‘Enabling’ criteria (covering what an organisation does), and four ‘Results’ criteria (covering what an organisation achieves).
Elementary comparison testing: A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]
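As an illustration of condition determination coverage, here is a minimal sketch in Python. The decision `loan_approved` and its three atomic conditions are hypothetical, invented for this example; the test cases are chosen so that flipping each single condition changes the outcome, showing that each condition independently determines the result.

```python
# Hypothetical decision with three atomic conditions, used to illustrate
# elementary comparison testing (condition determination coverage).
def loan_approved(age_ok: bool, income_ok: bool, has_guarantor: bool) -> bool:
    return age_ok and (income_ok or has_guarantor)

# Test cases chosen so that flipping exactly one condition, relative to the
# baseline, flips the outcome of the whole decision.
test_cases = [
    # (age_ok, income_ok, has_guarantor) -> expected
    ((True,  True,  False), True),   # baseline: approved
    ((False, True,  False), False),  # flipping age_ok changes the outcome
    ((True,  False, False), False),  # flipping income_ok changes the outcome
    ((True,  False, True),  True),   # flipping has_guarantor changes the outcome
]

for inputs, expected in test_cases:
    assert loan_approved(*inputs) == expected, inputs
```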
Emotional intelligence: The ability, capacity, and skill to identify, assess, and manage the emotions of one’s self, of others, and of groups.
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Entry criteria: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent starting a task that would entail more (wasted) effort than the effort needed to remove the failed entry criteria.
Entry point: An executable statement or process step which defines a point at which a given process is intended to begin.
Equivalence class: See equivalence partition.
Equivalence partition: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
Equivalence partition coverage: The percentage of equivalence partitions that have been exercised by a test suite.
Equivalence partitioning: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
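The following sketch applies equivalence partitioning to a hypothetical input field (an age valid from 18 to 65 inclusive, an assumption made for this example). The domain splits into three partitions, and one representative value per partition is tested, on the assumption that all values within a partition behave alike.

```python
# Hypothetical component: accepts an age in the valid range 18..65 inclusive.
def accept_age(age: int) -> bool:
    return 18 <= age <= 65

# Three equivalence partitions of the input domain; each is covered at
# least once by a single representative value.
partitions = {
    "below_valid": (10, False),   # representative of age < 18
    "valid":       (30, True),    # representative of 18..65
    "above_valid": (70, False),   # representative of age > 65
}

for name, (representative, expected) in partitions.items():
    assert accept_age(representative) == expected, name

# Equivalence partition coverage: partitions exercised / total partitions.
coverage = 100 * len(partitions) / len(partitions)
assert coverage == 100.0
```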
Error: A human action that produces an incorrect result.
Error guessing: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
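A small sketch of error guessing in practice, using a hypothetical parsing function invented for this example. The tests are not derived from a specification; they target mistakes a tester's experience suggests are likely, such as mishandled whitespace, empty input, and negative values.

```python
# Hypothetical function under test: parses a quantity entered by a user.
def parse_quantity(text: str) -> int:
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

# Error-guessing tests: inputs chosen from experience of common defects.
assert parse_quantity(" 7 ") == 7   # surrounding whitespace is often mishandled
assert parse_quantity("0") == 0     # boundary value frequently gets off-by-one logic

# Inputs that experience suggests may crash or be silently accepted.
for bad in ["", "abc", "-1"]:
    try:
        parse_quantity(bad)
    except ValueError:
        pass  # expected: the component rejects the erroneous input
    else:
        raise AssertionError(f"expected ValueError for {bad!r}")
```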
Error seeding: See fault seeding.
Error seeding tool: See fault seeding tool.
Error tolerance: The ability of a system or component to continue normal operation despite the presence of erroneous inputs.
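A minimal sketch of error tolerance, using a hypothetical averaging routine invented for this example: rather than failing on erroneous entries, the component skips them and continues normal operation on the valid inputs.

```python
# Hypothetical error-tolerant component: averages the numeric entries in a
# list, continuing normal operation despite erroneous (non-numeric) inputs.
def robust_average(values) -> float:
    total, count = 0.0, 0
    for v in values:
        # bool is excluded because it is a subclass of int in Python
        if isinstance(v, (int, float)) and not isinstance(v, bool):
            total += v
            count += 1
        # erroneous entries are tolerated and skipped, not fatal
    return total / count if count else 0.0

# The erroneous entries "oops" and None do not disrupt the computation.
assert robust_average([1, 2, "oops", 3, None]) == 2.0
```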
Establishing (IDEAL): The phase within the IDEAL model where the specifics of how an organization will reach its destination are planned. The establishing phase consists of the activities: set priorities, develop approach, and plan actions. See also IDEAL.
Evaluation: See testing.
Exception handling: Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.
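The sketch below shows exception handling for erroneous input in Python. The component (`read_port`, a name invented for this example) responds to a missing key, a non-numeric value, and an out-of-range value by falling back to a default rather than failing.

```python
# Hypothetical component: reads a port number from a configuration mapping,
# handling erroneous input from another component or a human user.
DEFAULT_PORT = 8080

def read_port(config: dict) -> int:
    try:
        port = int(config["port"])
    except KeyError:
        return DEFAULT_PORT          # erroneous input: key missing
    except (TypeError, ValueError):
        return DEFAULT_PORT          # erroneous input: not a number
    if not (0 < port < 65536):
        return DEFAULT_PORT          # erroneous input: out of range
    return port

assert read_port({"port": "443"}) == 443
assert read_port({}) == 8080                  # missing key handled
assert read_port({"port": "abc"}) == 8080     # bad value handled
assert read_port({"port": 99999}) == 8080     # out-of-range value handled
```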
Executable statement: A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
Exercised: A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.
Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.
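For a toy function with a tiny input domain, exhaustive testing is actually feasible, as the sketch below shows; for realistic domains the number of combinations makes it impractical. The `xor` function is a hypothetical example chosen because its complete input space has only four combinations.

```python
from itertools import product

# Toy function under test with a two-value domain per input.
def xor(a: bool, b: bool) -> bool:
    return a != b

# Exhaustive test suite: every combination of input values is exercised.
domain = [False, True]
all_combinations = list(product(domain, repeat=2))
assert len(all_combinations) == 4  # the complete input space

for a, b in all_combinations:
    assert xor(a, b) == ((a or b) and not (a and b))
```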
Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]
Exit point: An executable statement or process step which defines a point at which a given process is intended to cease.
Expected outcome: See expected result.
Expected result: The behavior predicted by the specification, or another source, of the component or system under specified conditions.
Experience-based technique: See experience-based test design technique.
Experience-based test design technique: Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
Exploratory testing: An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
Extreme programming: A software engineering methodology used within agile software development, whose core practices are pair programming, extensive code review, unit testing of all code, and simplicity and clarity in code. See also agile software development.