Contents of Software Testing and Manual Testing.
1. Basics of Software
What is Software?
It is a collection of programs, procedures and instructions that perform tasks on a computer.
It is a set of programs that help the computer function properly.
TYPES OF S/w: System S/w and Application S/w:
a) System S/w: it is a set of programs to control and manage the operations of the computer h/w.
Why do we require system s/w?
It is required to run the h/w parts of the computer and other applications.
It is essential for a computer to work.
It controls the allocation and usage of different h/w components.
E.g. operating systems such as Windows, Mac OS, Linux.
b) Application S/w or Software Application: people use application s/w according to their needs.
Why do we require application s/w?
It helps the user perform specific tasks.
It is executed when required.
E.g. games such as Mario.
E.g. MS Office, antivirus software, etc.
1.1 A software application is the same as application s/w.
> Application software is end-user s/w that is used to help accomplish a variety of tasks.
e.g. tasks like creating a PDF file,
or converting a Word file to a PDF file.
1.2 Types of Applications:
An application is nothing but an application program (noun): a program that gives a computer instructions that provide the user with tools to accomplish a task.
a) Basic application: used in nearly every discipline and occupation.
E.g. MS Office
b) Specialized application: s/w that is specially designed for an individual's or company's specific needs.
1.3 Application Domain knowledge.
What is Domain?
It is an area of activity; from an IT industry point of view, it is the project's business area.
Examples: BFSI, ERP, eCommerce, Media, Healthcare, Telecom, Retail, etc.
Domain Knowledge:
It is knowledge about a specific field of interest/subject. Considering a software Development case, domain knowledge is knowledge about the environment in which the target system operates.
e.g. Banking Domain:
A bank is a business; banks sell financial services such as Vehicle loans, home mortgage loans, business loans, checking accounts, credit card services, certificates of deposit, and individual retirement accounts etc…
1.4 Application Architecture?
An application architecture is a map of how an organization's software applications are assembled as part of its overarching enterprise architecture and how those applications interact with each other to meet business or user requirements.
E.g :
Core banking system payment application
Supply chain modules (Dealer and vendor) portal.
Branch user portal.
2. Development of Software Applications.
2.1 SDLC (Software Development Life Cycle) Phases, Methodologies, Process, and Models as per ISTQB:
#1) Requirement Gathering and Analysis > review the use case document (Actor > Action = Outcome) - BA + Dev Team + Tester.
#2) Design > Project Technical document developed by Architect + Developer.
#3) Implementation or Coding > Source code (programming languages such as Java, VB.NET) - Developer.
#4) Testing > Sign off and testing report - Tester.
#5) Deployment > Project delivered and installed at the client location, Production/Live - Dev Team.
#6) Maintenance > Any new production defect needs to be fixed - Dev Team + Tester.
Note: Some organizations describe the Software Development Life Cycle as 7 phases (planning, requirements, design, development, testing, deployment, and maintenance); the breakdown differs at different organization levels.
What does "model" mean?
> It is nothing but a best practice followed in an industry to solve their issues and problems.
2.2 SDLC MODELS - Waterfall, Iterative, V-V model, Agile (methodology):
Why do we require a model?
Because proper planning of the project during integration and good coordination between the teams reduce the risk of project failure.
1) Waterfall model: It is the earliest SDLC approach that was used for software development.
The waterfall Model illustrates the software development process in a linear sequential flow.
This means that any phase in the development process begins only if the previous phase is complete.
Phased waterfall model:
In this model the project is divided into small chunks and delivered at intervals by different teams.
Main disadvantage: improper planning may lead to failure of the project during integration, and any coordination mismatch between the teams may cause huge failure.
Iterative model: introduced because of the problems faced in the waterfall model, with the following variants:
a) Incremental model: the project is divided into chunks as in the phased waterfall model, but each chunk is developed and delivered as a working increment of the product.
b) Spiral model: the model uses a series of prototypes which are refined based on our understanding of what we are actually going to deliver. Plans are changed if required as the prototype is refined.
So each time the prototype is refined, the whole process cycle is repeated.
c) Evolutionary model: in the incremental and spiral models, the main problem is that for any change in between the SDLC cycle we need to iterate a whole new cycle.
For e.g., if the customer demands a new change at the final stage, we iterate the whole cycle again, which means we need to update all previous artifacts (requirement document, technical document, source code, test plan).
In the evolutionary model we divide the s/w into small units which can be delivered earlier to the customer, which means we try to fulfill the customer's needs; at a later stage we evolve the s/w with the customer's new needs there itself.
Project failure is an issue that arises for a multitude of reasons, but two stand-out causes are mistakes and problems during the SDLC.
1. Mistakes are caused by human error; after all, developers ARE human and are prone to errors.
2. Problems, on the other hand, are issues or unfavorable situations that need to be overcome and do not always stem from errors. For example, a communication problem during the design phase between client and developer leads to misaligned goals. Below we will go over some of the common problems that developers face during the SDLC.
2.3 SDLC phases and their challenges.
#1. Communication during the initial phase. As mentioned earlier, one of the biggest problem areas appears during the requirements gathering/defining stage and relates to communication problems between the involved parties.
#2. Management/scheduling (waiting for approval...). Work culture can lead to unfavorable management situations: inexperienced personnel are put in the role of project manager through leveraging relationships or a simple misunderstanding of a person's skills, and even budget limits play a role.
#3. Development and "late requests": requests for change arriving at any stage other than the requirement gathering stage, often due to the initial communication problem.
#4. Crunch time testing. Testing is key to ensuring that the program works as per the initial vision, and also nowadays to ensure all security measures/bugs are tested.
#5. Lacking feedback loops for problems.
2.4 Project Team Organization.
A Project Team is an organized group of people who are involved in performing shared/individual tasks of the project as well as achieving shared/individual goals and objectives for the purpose of accomplishing the project and producing its results.
> CEO
> Director / Head
> Department Head
> Program Head
> Project Manager
> Test Lead
> Sr. Tester
> Tester
> Jr. Tester
3. Introduction To Software testing
3.1 Introduction
3.2 S/w testing tasks and participants.
Tasks done by Tester:
> Finding defects as early as possible
> Reporting defects
> Checking corrections done for defects
Participants of testing :
> Get involved in testing at different phases of SDLC
> The purpose and type of testing they do differ based on their role.
--------------------------------------------------------------------------------------------------------------------
> The V-model represents different testing activities across the SDLC.
3.3.1 Verification.
What is Verification?
> A method of testing to “find defects as early as possible”
> Participants: all
> Objective: make sure that the software application is getting developed in the right way
"Are we doing the job right?"
> Process:
a) Systematically read the contents/work product of a software application.
b) Find issues/discrepancies
c) Get them solved
d) Also known as Static Testing, as the software application is not actually used or executed.
e) Methods of verification: walkthrough, review.
> Walkthrough:
a) An informal process, initiated by the author of the document being walked through.
b) Done mainly with the objective of providing information and gathering suggestions.
c) Walkthrough Process
- Author explains the product.
- Colleagues come out with observations.
- Author provides clarification if required
- Author notes down relevant points and takes corrective actions.
> Review: a formal method; a planned activity that uses a well-defined process to find defects.
Review Process :
a) Identify defects.
b) Document the findings.
c) Close defects by taking necessary action.
d) Confirm the defect closure through review.
e) Review Outcome is used further for process improvement.
> Tools for Verification:
1. Checklist: a list of points containing guidelines and standards, used to check the document or code.
2. Static code analysis tools
3. Compiler
3.3.2 Validation.
What is Validation?
Disciplined approach to evaluate whether the final, developed application fulfills its specific intended purpose
"Are we doing the right job?"
Done by using or executing the developed application.
Helps in identifying the presence of discrepancies, not their location.
Also known as Dynamic Testing as application is actually used (Testing by running executable).
3.4 V Model.
The V-shaped model in software development:
In the waterfall model we move to the next phase only when the previous phase is fully complete;
before that we cannot initiate the next level of the model.
If an error is found in any phase, we cannot move ahead: STOP.
So in the V model, testing is involved at each and every stage or phase, to avoid delay and confusion.
Phases at each level of the model:
Req Gathering <----------------------------> Acceptance Testing
System Analysis <--------------------> System Testing
S/w Designing <----------> Integration Testing
Module Design <----> Unit Testing
CODING
Unit Testing > each individual module is tested against its corresponding unit-level input parameters. > Developers perform this testing.
Integration Testing > is each module sending the right parameters to the other modules correctly, i.e. are they communicating well? > Developers and testers perform this testing.
System Testing > the whole system is working fine according to the requirements. > Tester.
Acceptance Testing > the requirements meet the acceptance criteria of the s/w project,
i.e. whether all mentioned points have been covered or not. > UAT tester and Tester.
At every stage, a test plan and test cases are created to verify and validate the product according to the requirements of that stage. For example, in the requirement gathering stage the test team prepares all the test cases corresponding to the requirements. Later, when the product is developed and ready for testing, the test cases of this stage validate the s/w against the requirements of this stage.
This makes both verification and validation go in parallel, which is why this model is also known as the V&V Model.
V Testing Concept:
It is the continuous testing throughout SDLC (Software Development Life Cycle).
Need to plan testing activities parallel with SDLC.
It helps in identifying needs of planning and design of testing in early stages of development process.
Left arm of the 'V' model: the conventional waterfall model.
Right arm of the 'V' model: the testing methodology required at each phase of development.
Development and Testing both get equal importance and it forces management for their commitment, attention and planning of required resources.
Output from the developer phases can be tested or reviewed by testing team.
Is the V model an extension of the waterfall model? Yes:
Left arm corresponds to Waterfall Model.
Right arm corresponds to Testing Phases.
Each verification activity has its corresponding validation activity.
The major purpose of the V model is to meet the business requirements and provide confidence in the product before it is delivered.
3.5 Levels of testing.
3.5.1 Unit testing: It is a level of software testing where individual units/components of a software are tested. A unit is the smallest testable part of any software. It usually has one or a few inputs and usually a single output. In procedural programming, a unit may be an individual program, function, procedure, etc.
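As an illustration (a minimal sketch, not part of the source material), here is a unit test in Python using the standard unittest module; the add function and its test values are invented for the example:

import unittest

def add(a, b):
    # Hypothetical unit under test: the smallest testable piece of code.
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        # One input pair, one expected output: the shape of a unit test.
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()

Developers typically run such tests on every build, which is why unit testing sits at the lowest level of the V model's right arm.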
3.5.2 Integration testing: It is a level of software testing where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units. It occurs after unit testing and before validation testing.
3.5.3 System testing: It is a level of software testing where a complete and integrated software is tested. The purpose of this test is to evaluate the system's compliance with the specified requirements. (Definition: the process of testing an integrated system to verify that it meets specified requirements.)
3.5.4 UAT/ Acceptance Testing : Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the users, customers or other authorized entity to determine whether or not to accept the system.
The goal is to establish confidence in the system, parts of the system or specific non-functional characteristics, e.g. usability of the system.
Finding defects is not the main focus.
The system undergoes two stages of acceptance testing:
a) Alpha Testing: this testing is done at the developer's site.
b) Beta Testing / Field Testing: the system is sent to a cross-section of users who install it and use it under real-world working conditions.
V model disadvantages:
Very rigid and least flexible.
Software is developed during the implementation phase, so no early prototypes of the software are produced.
If any changes happen midway, then the test documents along with the requirement documents have to be updated.
What is an environment?
The surroundings or conditions in which a person or a tester can perform testing.
Representative environments: SIT, UAT (CUG, Preprod) > Prod (with the production database).
User types: Dev, Tester, UAT team, User or Stakeholder.
Different environment levels for testing:
SIT > QA environment where the tester checks defects.
UAT > User Acceptance Testing > the UAT team checks bugs on this environment.
PREPROD > the tester can check defects here on a production-like setup.
CUG > Closed User Group (a limited set of users, e.g. 10, can access a PROD environment with the live database).
PROD > live movement; actual users can use the website.
3.6 Types of Documents?
FRD - Functional Requirement Document: specifies the functions that a component or system must perform.
BRD - Business Requirement Document: specifies the business requirements; similar in intent to the FRD, but from the business perspective.
Release note - specifies the changes, whether new changes or fixes, in a proper format.
CD - Change Document / CR - Change Request: documents a small change required in the project; it specifies the changes, whether new changes or fixes, in a proper format.
FTC - Functional Test Cases document: specifies the functional test cases; the results are also mentioned.
NFRD - Non-Functional Requirement Document: these requirements do not relate to functionality but to attributes such as reliability, efficiency, usability, maintainability and portability.
SRD - Specification Requirement Document: specifies the functions that a component or system must perform.
> Test Scenario document: a test scenario gives a high-level idea of "what needs to be tested?",
whereas test cases give an idea of "how it needs to be tested?".
The terms 'test scenario' and 'test case' are often used interchangeably; however, a test scenario has several steps, whereas a test case has a single step. Viewed from this perspective, test scenarios are test cases, but they include several test cases and the sequence in which they should be executed. Apart from this, each test is dependent on the output of the previous test.
Example of Test Scenario:
For an eCommerce Application, a few test scenarios would be
Test Scenario 1: Check the Search Functionality (What)
Test case 1: Check if a blank value is entered (How)
Test case 2: Check if an invalid value is entered
Test case 3: Check if a valid value is entered
Test Scenario 2: Check the Login Functionality
Test case 1: Check with invalid credentials > expect: alert message
Test case 2: Check with valid credentials > expect: user is logged in to the user page.
Test case 3: Check with blank credentials > expect: alert message.
Test Scenario 3: Check the Payments Functionality (assignment for you; a validation sketch follows this list)
Test case 1: Check with an invalid account no > expect: alert message (Please enter a valid account no)
Test case 2: Check with a valid account no > expect: user details should be displayed.
Test case 3: Check if the account no matches a verified customer > expect: user details verified.
Test case 4: Check if the account no does not match a verified customer > expect: user details not verified.
Test case 5: Check if the amount value is valid > expect: user enters 1 rs, then the system should display the confirmation page.
Test case 6: Check if the amount value is invalid > expect: user enters 0 or -1 rs, then the system should display an alert message (Please enter a valid amount).
Test case 7: Check if the amount value is less than or equal to 20,000 > expect: user enters 20,000, then the system should allow the customer to do the transaction.
Test case 8: Check if the amount value is greater than 20,000 > expect: user enters 20,001, then the system should not allow the customer to do the transaction and an alert message is displayed.
Test case 9: Check if the user transfers the amount before 6.30 pm > expect: the system should allow the customer to do the transaction and a success message is displayed.
Test case 10: Check if the user transfers the amount after 6.30 pm > expect: the system should not allow the customer to do the transaction and an alert message is displayed (Transaction timed out, or Please try tomorrow between 9 am and 6.30 pm).
Test case 11: Check if the user's transfer completes successfully > expect: the system should send a success message to the customer's mobile number with the balance amount.
Test case 12: Check if the user's transfer does not complete successfully > expect: the system should send a transaction-failed message to the customer's mobile number with the balance amount.
Test case 13: Check if the user adds a customer mobile no > expect: the system should allow adding the customer mobile number and a success message is displayed.
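The amount rules exercised by test cases 5 to 8 can be captured in a small validation function. This is a hypothetical Python sketch: the function name and the message texts are assumptions; only the zero/negative and 20,000 limits come from the test cases above.

def validate_transfer_amount(amount, limit=20_000):
    # Returns (allowed, message) for a transfer amount, per test cases 5-8.
    if amount <= 0:                                  # test case 6: 0 or -1 rs
        return False, "Please enter a valid amount."
    if amount > limit:                               # test case 8: 20,001 rs
        return False, "Amount exceeds the transfer limit."
    return True, "Proceed to confirmation page."     # test cases 5 and 7

# Quick checks mirroring the expected results:
assert validate_transfer_amount(1)[0] is True        # test case 5
assert validate_transfer_amount(0)[0] is False       # test case 6
assert validate_transfer_amount(20_000)[0] is True   # test case 7
assert validate_transfer_amount(20_001)[0] is False  # test case 8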
Why do we write Test Scenario?
The main reason to write a test scenario is to verify the complete functionality of the software application end to end.
It also helps you to ensure that the business processes and flows are as per the functional requirements.
Test Scenarios can be approved by various stakeholders like Business Analyst, Developers, Customers to ensure the Application Under Test is thoroughly tested. It ensures that the software is working for the most common use cases.
They serve as a quick tool to determine the testing work effort and accordingly create a proposal for the client or organize the workforce.
They help determine the most critical end-to-end transactions or the real use of the software applications.
Once these Test Scenarios are finalized, test cases can be easily derived from the Test Scenarios.
> Traceability Matrix document: Traceability Matrix (also known as Requirement Traceability Matrix - RTM) is a table that is used to trace the requirements during the Software Development Life Cycle. It can be used for forward tracing (i.e. from Requirements to Design or Coding) or backward (i.e. from Coding to Requirements). There are many user-defined templates for RTM.
Each requirement in the RTM document is linked with its associated test case so that testing can be done as per the mentioned requirements. Furthermore, Bug ID is also included and linked with its associated requirements and test case. The main goals for this matrix are -
Make sure the software is developed as per the mentioned requirements.
Helps in finding the root cause of any bug.
Helps in tracing the developed documents during different phases of SDLC.
Example RTM (table):
Requirement ID | Requirement | Linked test cases and bugs
R1 | Login functionality check | test cases and bug IDs for R1
R2 | Gmail account login check | test cases and bug IDs for R2
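As a rough illustration (not a standard format), an RTM can also be held as a simple mapping in code; all IDs below are hypothetical:

# Requirement Traceability Matrix as a mapping:
# requirement ID -> linked test cases and defect (bug) IDs.
rtm = {
    "R1": {"requirement": "Login functionality check",
           "test_cases": ["TC-01", "TC-02"], "bugs": ["BUG-7"]},
    "R2": {"requirement": "Gmail account login check",
           "test_cases": ["TC-03"], "bugs": []},
}

# Forward tracing: which test cases cover requirement R1?
print(rtm["R1"]["test_cases"])
# Backward tracing: which requirement does TC-03 belong to?
print([r for r, v in rtm.items() if "TC-03" in v["test_cases"]])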
3.6.1 Types of Testing?
a) Testing of function (Functional Testing):
It is a type of software testing whereby the system is tested against the functional requirements/specifications. Functions (or features) are tested by feeding them input and examining the output. Functional testing ensures that the requirements are properly satisfied by the application.
Based on ISO 9126, the main five sub-characteristics are "Suitability, Interoperability, Security, Accuracy and Compliance."
Unit Testing.
Component Testing.
Smoke Testing.
Integration Testing.
Regression Testing.
Sanity Testing.
System Testing.
User Acceptance Testing.
b) Testing of s/w product characteristics (Non-Functional Testing):
It is defined as a type of software testing that checks non-functional aspects (performance, usability, reliability, efficiency, maintainability, portability, etc.) of a software application. It is designed to test the readiness of a system against non-functional parameters which are never addressed by functional testing.
Baseline testing.
Compliance testing.
Documentation testing.
Endurance testing.
Load testing.
Localization testing and Internationalization testing.
Performance testing.
Recovery testing.
Stress Testing.
Maintainability Testing
Portability testing
c) Testing of s/w structure/architecture (Structural Testing) >> white-box / glass-box (inside-the-box) testing.
Topic to be covered later with the SoapUI tool: an XML request (e.g. a form with mobile number, user name and account number) is entered in the left panel and the XML result is shown in the right panel.
d) Testing related to changes (Confirmation/Retesting and Regression Testing):
New changes and defect fixes:
Confirmation Testing (Retesting): when a test fails because of a defect, that defect is reported and a new version of the software is expected with the defect fixed. Verifying the fix is known as confirmation testing, also known as re-testing.
Regression Testing :
It is done to ensure that changes have not affected the unchanged parts. Retesting is done to make sure that the test cases which failed in the last execution pass after the defects are fixed. Regression testing is not carried out for specific defect fixes; retesting is carried out based on the defect fixes.
OR
It is defined as a type of software testing to confirm that a recent program or code change has not adversely affected existing features. Regression Testing is nothing but a full or partial selection of already executed test cases which are re-executed to ensure existing functionalities work fine.
E.g. A = the module where the bug was fixed -> confirmation/retesting.
B = the existing, unchanged modules.
Regression testing = A + B.
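A minimal sketch of the A + B idea, assuming made-up test case IDs and a placeholder runner:

def run(test_case):
    # Placeholder executor; a real runner would drive the application.
    print("executing", test_case)

fixed_defect_tests = ["TC-LOGIN-05"]                          # A: tests that exposed the fixed bug
existing_suite = ["TC-LOGIN-01", "TC-HOME-02", "TC-PAY-03"]   # B: unchanged modules

# Confirmation/retesting: re-run only the tests that failed before the fix.
for tc in fixed_defect_tests:
    run(tc)

# Regression: re-run A + B to confirm the fix has not broken anything else.
for tc in fixed_defect_tests + existing_suite:
    run(tc)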
e) Maintenance Testing : Maintenance Testing is done on the already deployed software. The deployed software needs to be enhanced, changed or migrated to other hardware. The Testing done during this enhancement, change and migration cycle is known as maintenance testing.
3.6.2 Software Testing Life Cycle (STLC):
STLC is a sequence of different activities performed by the testing team to ensure the quality of the software or the product.
STLC is an integral part of Software Development Life Cycle (SDLC). But, STLC deals only with the testing phases.
Phases :
a) Requirement analysis > During this phase, test team studies the requirements from a testing point of view to identify the testable requirements.
The QA team may interact with various stakeholders (Client, Business Analyst, Technical Leads, System Architects etc) to understand the requirements in detail.
Requirements could be either Functional (defining what the software must do) or Non-Functional (defining system performance, security, availability).
b) Test Planning > The senior QA manager determines effort and cost estimates for the project and prepares and finalizes the Test Plan. In this phase, the Test Strategy is also determined.
c) Test Case Development > This phase involves the creation, verification and rework of test cases & test scripts. Test data is identified/created and is reviewed and then reworked as well.
d) Environment Setup > Test environment decides the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of testing process and can be done in parallel with Test Case Development Stage. Test team may not be involved in this activity if the customer/development team provides the test environment in which case the test team is required to do a readiness check (smoke testing) of the given environment.
e) Test execution > During this phase, the testers will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction and retesting will be performed.
f) Test cycle Closure > Testing team will meet, discuss and analyze testing artifacts to identify strategies that have to be implemented in the future, taking lessons from the current test cycle. The idea is to remove the process bottlenecks for future test cycles and share best practices for any similar projects in the future.
What is Entry and Exit Criteria?
Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing can begin.
Exit Criteria: exit criteria define the items that must be completed before testing can be concluded.
3.6.3 What is PDCA?
PDCA: Plan> Do > Check > Act:
Plan : Define Scenarios , Conditions
Do : Write test cases
Check : Execute Test cases or Test
Act: Review Results
Accept for testing: perform a sanity test.
Verify: validate fixes.
Assess: conduct regression.
Promote release: client acceptance.
Manage change: change requests.
4. Process of Software Testing (very important):
Testing is a process rather than a single activity. Testing must be planned, and it requires discipline to act upon the plan. The quality and effectiveness of software testing are primarily determined by the quality of the test processes used.
The activities of testing can be divided into the following basic steps:
>> Planning and Control
>> Analysis and Design
>> Implementation and Execution
>> Evaluating exit criteria and Reporting
>> Test Closure activities
a) Test Planning: test planning involves producing a document that describes the overall approach and test objectives. It involves reviewing the test basis, identifying the test conditions based on analysis of the test items, writing test cases and designing the test environment. Completion or exit criteria must be specified so that we know when testing (at any stage) is complete.
> To determine the scope and risks and identify the objectives of testing.
> To determine the required test resources like people, test environments etc.
> To schedule test analysis and design tasks, test implementation, execution and evaluation.
b) Test Control : This is the activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project.
c) Analysis and Design : To review the test basis. The test basis is the information on which test cases are based, such as requirements, design specifications, product risk analysis, architecture and interfaces.
To identify test conditions
To design the test cases
To design the test environment set-up and identify the required infrastructure and tools
d) Implementation and Execution: test execution involves actually running the specified tests on a computer system, either manually or by using an automated test tool. It is the fundamental test process phase in which the actual work is done.
Test implementation has the following major tasks:
To develop and prioritize test cases by using techniques and create test data for those tests.
To create test suites from the test cases for efficient test execution. A test suite is a collection of test cases that are used to test a software program.
To re-execute the tests that previously failed in order to confirm a fix.
To log the outcome of the test execution. A test log is the status of the test case (pass/fail).
To compare actual results with expected results.
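A minimal sketch of the last two tasks, logging outcomes by comparing actual results with expected results; the test IDs and result strings are invented for illustration:

test_log = []

def execute_and_log(test_id, expected, actual):
    # Compare actual with expected and record the pass/fail status.
    status = "PASS" if actual == expected else "FAIL"
    test_log.append({"test_id": test_id, "expected": expected,
                     "actual": actual, "status": status})
    return status

execute_and_log("TC-01", expected="Login successful", actual="Login successful")
execute_and_log("TC-02", expected="Alert message", actual="User logged in")

for entry in test_log:   # the test log: status of each executed test case
    print(entry)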
e) Evaluating exit criteria and Reporting: evaluating exit criteria is the process of defining when to stop testing. It depends on coverage of code, functionality or risk. Basically it also depends on business risk, cost and time, and varies from project to project. Exit criteria come into the picture when:
The maximum number of test cases has been executed with a certain pass percentage
The bug rate falls below a certain level
The deadlines are reached
Evaluating exit criteria has the following major tasks:
To assess if more tests are needed or if the exit criteria specified should be changed
To write a test summary report for stakeholders
f) Test Closure activities: test closure activities are done when the software is ready to be delivered. Testing can also be closed for other reasons, such as:
When a project is cancelled
When some target is achieved
When a maintenance release or update is done
Test closure activities have the following major tasks:
To check which planned deliverables are actually delivered and to ensure that all incident reports have been resolved.
To finalize and archive testware such as scripts, test environments, etc. for later reuse.
To handover the testware to the maintenance organization. They will give support to the software.
To evaluate how the testing went and learn lessons for future releases and projects.
4.1 STLC: S/w Testing Life Cycle?
Requirement Analysis.
Planning the test.
Developing the test case.
Setting up the test environment.
Execution of the test.
Closing the test cycle.
4.2 Testing team Organization
Program Manager
Project Manager:
> Test Manager / Testing lead:
Sr Tester
Jr Tester
Program Manager
Project Manager:
> Development Manager:
Developer/ programmer
4.3 What is a Test Plan template? Who creates it -> the Test Lead or Manager.
It defines who will do what task, when, and how.
It includes the purpose of a test plan, i.e. the scope, approach, resources and schedule of the testing activities, plus entry and exit criteria.
A TEST PLAN TEMPLATE is a detailed document that describes the test strategy, objectives, schedule, estimation, deliverables, and resources required for testing. The test plan helps us determine the test effort needed to validate the quality of the application under test.
The test plan serves as a blueprint to conduct software testing activities as a defined process which is minutely monitored and controlled by the test manager.
As per IEEE 829 Standard Test Plan Template:
1. Test Plan Identifier
2. Introduction
3. Test Items
4. Features to Be Tested
5. Features not to Be Tested
6. Approach
7. Item Pass/ Fail Criteria
8. Suspension and Resumption Criteria
9. Test Deliverables
10. Test Tasks
11. Environment
12. Roles and Responsibilities
13. Staffing & Training Needs
14. Schedule
15. Risks and Contingencies
16. Approvals
1. Test Plan Identifier: It identifies the project and may include version information. In some cases, companies might follow a convention for a test plan identifier. Test plan identifier also contains information of the test plan type.
2. Introduction : It is a brief summary of the product that is being tested.
3. Test Items: the items to be tested, e.g. module names, menu names, etc.
4. Features to Be Tested: which features of the system or subsystem will be tested?
5. Features not to Be Tested: which features of the system or subsystem will NOT be tested?
6. Approach: a test approach is the test strategy implementation of a project; it defines how testing will be carried out. A test approach has two techniques: Proactive - an approach in which the test design process is initiated as early as possible in order to find and fix the defects before the build is created.
Reactive - an approach in which the testing is not started until after design and coding are completed.
(More on this later.)
OR :
Proactive - An approach focuses on eliminating problems before they have a chance to appear.
Reactive approach is based on responding to events after they have happened.
The difference between these two approaches is the perspective each one provides in assessing actions and events.
7. Item Pass/ Fail Criteria : Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.
8. Suspension and Resumption Criteria:
Suspension criteria specify the criteria to be used to suspend all or a portion of the testing activities while resumption criteria specify when testing can resume after it has been suspended.
E.g. when a defect is introduced that does not allow any further testing (BLOCKED).
OR :
Suspension Criteria: any situation which impedes the ability to continue testing, or removes the value of performing testing, leads to suspension of testing activities.
Resumption Criteria: when the problem that caused the suspension has been resolved, testing activities can be resumed.
9. Test Deliverables: test deliverables are the test artifacts which are given to the stakeholders of a software project during the SDLC (Software Development Life Cycle). A software project which follows the SDLC undergoes different phases before delivery to the customer, and there are deliverables in every phase.
10. Test Tasks: a list of tasks (Task 1, Task 2, ...) to be discussed in the team and completed.
11. Environment : A testing environment is a setup of software and hardware for the testing teams to execute test cases. In other words, it supports test execution with hardware, software and network configured. Test bed or test environment is configured as per the need of the Application Under Test.
12. Roles and Responsibilities:
S.No | Role | Responsibilities
1 | QA Manager | Reviews test cases, reviews and approves the issues
2 | Senior SQA | Assigns tasks, tracks the testing progress
3 | QA | Prepares test cases, sets up the test environment
4 | Tester | Executes test cases, reports the issues
13. Staffing & Training Needs: check whether any training or doubt-clearing sessions are required.
14. Schedule: deadline dates, including testing steps or tasks with target start and end dates.
15. Risks and Contingencies: "risks are future uncertain events with a probability of occurrence and a potential for loss." Risk identification and management are the main concerns in every software project. Schedule risks mainly affect the project and ultimately the company's economy, and may lead to project failure.
16. Approvals: approve the test plan. Once you can verify that all requirements are covered by test cases, approve the test plan. Some test artifacts might require additional review or approval iterations, depending on the formal review process in your team.
Note:
Some of the other format vary from company to company:
1.0 INTRODUCTION
2.0 OBJECTIVES AND TASKS
2.1 Objectives
2.2 Tasks
3.0 SCOPE
4.0 Testing Strategy
4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing
5.0 Hardware Requirements
6.0 Environment Requirements
6.1 Main Frame
6.2 Workstation
7.0 Test Schedule
8.0 Control Procedures
9.0 Features to Be Tested
10.0 Features Not to Be Tested
11.0 Resources/Roles & Responsibilities
12.0 Schedules
13.0 Significantly Impacted Departments (SIDs)
14.0 Dependencies
15.0 Risks/Assumptions
16.0 Tools
17.0 Approvals
A sample document will be shown.
4.4 Which defect document template format is used?
In an Excel sheet: tab 1 > the defect list, and
tab 2 > snaps/screenshots, defect-wise.
S.No:
CR No:
Date:
Defect ID:
Defect Description:
Test Actions/Steps:
Priority (Low, Medium, High):
Severity (Low, Medium, High) (covered in the next session):
Status (Open, Reopen, Closed):
Closed Date:
Defect Assigned To:
Defect Type (Defect or Observation):
QA Comment:
Browsers:
Defect types: Defect and Observation.
A defect can be fixed by the developer.
An observation requires a business suggestion, or it may be deferred.
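For illustration only, a defect record filled in with the fields above; every value is hypothetical:

defect = {
    "s_no": 1,
    "cr_no": "CR-102",
    "date": "2021-04-15",
    "defect_id": "DEF-231",
    "description": "Alert message not shown for blank login credentials",
    "steps": ["Open login page", "Leave both fields blank", "Click Login"],
    "priority": "Medium",
    "severity": "High",
    "status": "Open",                 # Open / Reopen / Closed
    "closed_date": None,
    "assigned_to": "Developer A",
    "defect_type": "Defect",          # Defect or Observation
    "qa_comment": "Reproducible on every attempt",
    "browsers": ["Chrome", "Firefox"],
}
print(defect["defect_id"], defect["status"])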
4.5 Which test case template format is used?
Test cases should be short, simple and clear. While writing a TC, keep the end user in mind; do not assume, take the help of the BA in case of any confusion; aim for 100% coverage; avoid repeated test cases.
Use proper test case ID identification, and finally do a peer review.
The template describes test steps, expected results and actual results.
1. Test case ID: #23
2. Test case title: Login function
3. Description: to verify the login function
4. Precondition: QA URL
5. Priority - *often avoided; instead of priority, testers display severity.
6. Req ID: #req121
7. Steps/Actions
8. Expected result
9. Actual result - avoid
10. Test data: credentials and URL
11. Result: Pass/Fail
12. Module name -> menu name
13. Test type: manual or automation
What is the Test Case?
A TEST CASE is a set of actions executed to verify a particular feature or functionality of your software application. A Test Case contains test steps, test data, precondition, postcondition developed for specific test scenario to verify any requirement. The test case includes specific variables or conditions, using which a testing engineer can compare expected and actual results to determine whether a software product is functioning as per the requirements of the customer.
Why do we write Test Cases?
Here are some important reasons to create a test case:
Test cases help to verify conformance to applicable standards, guidelines and customer requirements
Helps you to validate expectations and customer requirements
Increased control, logic, and data flow coverage
You can simulate 'real' end user scenarios
Exposes errors or defects
When test cases are written for test execution, the test engineer's work will be organized better and simplified
4.6 What are S/w Testing tools?
> Selenium. Selenium is a testing framework to perform web application testing across various browsers and platforms like Windows, Mac, and Linux.
> Katalon Studio.
> TestingWhiz.
> HPE Unified Functional Testing (HP – UFT formerly QTP).
> TestComplete.
> Ranorex.
> Sahi.
> Watir.
> Tosca Testsuite.
> Silk Test.
> Squish.
> Appium.
> EggPlant.
4.7 Defect Management Tool ?
You can put it this way: "the better the bug tracking tool, the better the quality of the product." Here is a list of the top bug tracking tools in the software industry:
> BackLog
> SpiraTeam
> BugZilla
> Atlassian JIRA: it was designed specifically as an agile software testing tool. This commercial tool is used widely by QA professionals for tracking environmental and project-level issues, in addition to bugs and defects, in agile environments.
> Mantis
> RedMine
> Trac
> Axosoft
> HP ALM/ Quality Center
> eTraxis
> Bugnet
> FogBugz
> The Bug Genie
> Lighthouse
> Zoho bug tracker
> BugHost
> Collabtive
> Team Foundation Server
> IBM Rational ClearQuest
> Unfuddle
4.8 What are the methods of testing?
Black-box, white-box and grey-box testing.
Comparison of the testing methods:
Black-Box Testing: specification-based testing:
> The internal workings of an application need not be known.
> Also known as closed-box testing, data-driven testing, or functional testing.
> Performed by end-users and also by testers and developers.
> Testing is based on external expectations - Internal behavior of the application is unknown.
> It is least time-consuming.
> Not suited for algorithm testing (e.g. verifying how a+b is computed internally).
> Such testing can only be done by a trial-and-error method.
White-Box Testing: structure-based testing:
> Tester has full knowledge of the internal workings of the application.
> Also known as clear-box testing, structural testing, or code-based testing.
> Normally done by testers and developers.
> Internal workings are fully known and the tester can design test data accordingly.
> The most exhaustive and time-consuming type of testing.
> Suited for algorithm testing.
> Data domains and internal boundaries can be better tested.
Grey-Box Testing:
> The tester has limited knowledge of the internal workings of the application.
> Also known as translucent testing, as the tester has limited knowledge of the insides of the application.
> Performed by end-users and also by testers and developers.
> Testing is done on the basis of high-level database diagrams and data flow diagrams.
> Partly time-consuming and exhaustive.
> Not suited for algorithm testing.
> Data domains and internal boundaries can be tested, if known.
Today's Assignment:
Quality Assurance and Quality Control?
QA is a set of activities for ensuring quality in the processes by which products are developed.
QC is a set of activities for ensuring quality in products.
The activities focus on identifying defects in the actual products produced.
QA aims to prevent defects with a focus on the process used to make the product.
Proactive Testing and Reactive Testing?
> Proactive - An approach in which the test design process is initiated as early as possible in order to find and fix the defects before the build is created.
> Reactive - An approach in which the testing is not started until after design and coding are completed.
-----------------------------------------------------------------------------------------------------------------------
5. Test Design for Functional Testing.
5.1 Introduction.
5.2 Test Scenario (What to Test)?
The main reason to write a test scenario is to verify the complete functionality of the software application.
It also helps you to ensure that the business processes and flows are as per the functional requirements.
Test Scenarios can be approved by various stakeholders like Business Analyst, Developers, Customers to ensure the Application Under Test is thoroughly tested. It ensures that the software is working for the most common use cases.
They serve as a quick tool to determine the testing work effort and accordingly create a proposal for the client or organize the workforce.
They help determine the most critical end-to-end transactions or the real use of the software applications.
Once these Test Scenarios are finalized, test cases can be easily derived from the Test Scenarios.
5.3 Test case (How to test)?
What is the Test Case?
A TEST CASE is a set of actions executed to verify a particular feature or functionality of your software application. A Test Case contains test steps, test data, precondition, postcondition developed for specific test scenario to verify any requirement. The test case includes specific variables or conditions, using which a testing engineer can compare expected and actual results to determine whether a software product is functioning as per the requirements of the customer.
Why do we write Test Cases?
Here are some important reasons to create a test case:
Test cases help to verify conformance to applicable standards, guidelines and customer requirements
Helps you to validate expectations and customer requirements
Increased control, logic, and data flow coverage
You can simulate 'real' end user scenarios
Exposes errors or defects
When test cases are written for test execution, the test engineer's work will be organized better and simplified
5.4 Test case reports :
Test case reports provide information about the status of test cases, test suites, or test scripts for a given scope.
Purpose: Test case reports answer the following questions:
What is the development status of test cases, test suites, or test scripts?
What are the counts of test cases, test scripts, and test suites by their states?
What are the TCER or test script coverage details of select test cases?
Which test cases fit selected categories by team area or test environment?
Which test cases have associated work items or are affected by defects?
From the BRD we get the below specifications:
Test Conditions
Test cases
Test Procedure
Test Script
5.5 Test case Review
The test case review process is an important process to follow in software testing. Test cases ensure that each and every functionality mentioned in the Software Requirement Specification is covered. Test cases should be effective and should also follow the standards for writing test cases.
TEST CASE REVIEW CHECKLIST:
A high-level checklist for test case review is as follows:
1. All the requirements mentioned in FRS are covered.
2. All negative scenario tests are covered.
3. Boundary value conditions are covered, i.e. tests covering lower/upper bounds are included.
4. Data Validity tests are covered.
5. All the GUI related test cases (if mentioned in FRS) are covered.
6. Check whether there is any invalid test case.
7. Check whether there is any redundancy, i.e. duplicated test cases.
8. Check the test case priority.
9. Check the narration of each test case.
10. Check that no major scenario is missing from the test cases.
11. Test steps are written completely and understandably.
12. A clear expected result is mentioned for each step.
13. Check for all textual/grammatical errors.
14. The length of the test steps is appropriate.
15. Information related to the setup of the test environment, prerequisites, and the pass/fail end conditions is present.
5.6 Test data?
In the current epoch of revolutionary growth in information technology, testers commonly experience extensive consumption of test data in the software testing life cycle.
Testers not only collect/maintain data from existing sources, but also generate huge volumes of test data to ensure their quality contribution to the delivery of the product for real-world use.
Therefore, we as testers must continuously explore, learn and apply the most efficient approaches for data collection, generation, maintenance, automation and comprehensive data management for all types of functional and non-functional testing.
Referring to a study conducted by IBM in 2016, searching, managing, maintaining, and generating test data encompasses 30%-60% of a tester's time. It is undeniable evidence that data preparation is a time-consuming phase of software testing.
Preparing proper input data is part of the test setup. Generally, testers call it testbed preparation. In the testbed, all software and hardware requirements are set using the predefined data values.
E.g. login screen test data: user ID/username and password xxxxxxx.
How to Prepare Data that will Ensure Maximum Test Coverage?
Design your data considering the following categories:
1) No data: Run your test cases on blank or default data. See if proper error messages are generated.
2) Valid data set: Create it to check if the application is functioning as per requirements and valid input data is properly saved in database or files.
3) Invalid data set: prepare an invalid data set to check application behavior for negative values, alphanumeric string inputs, etc.
4) Illegal data format: Make one data set of illegal data format. The system should not accept data in an invalid or illegal format. Also, check proper error messages are generated.
5) Boundary Condition dataset: Dataset containing out of range data. Identify application boundary cases and prepare data set that will cover lower as well as upper boundary conditions.
6) The Dataset for performance, load and stress testing: This data set should be large in volume.
Example test data:
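A hedged sketch in Python of data sets for a numeric field accepting 1 to 1000, one set per category above; the field and the concrete values are assumptions:

test_data = {
    "no_data":        ["", None],                 # blank/default input
    "valid":          [1, 500, 1000],             # as per requirements
    "invalid":        [-5, "abc", "12ab"],        # negatives, alphanumeric strings
    "illegal_format": ["1,000", "1e3"],           # wrong format, should be rejected
    "boundary":       [0, 1, 1000, 1001],         # lower/upper bounds and beyond
    "load":           list(range(1, 1001)),       # large volume for load/stress tests
}
for category, values in test_data.items():
    print(category, values[:5])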
5.8 Test data creation methods for Black box testing.
5.8.1 Equivalence class partitioning (ECP)?
ECP is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once.
In this method, the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases, still covering maximum requirements.
In short, it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.
For Example, If you are testing for an input box accepting numbers from 1 to 1000 then there is no use in writing thousand test cases for all 1000 valid input numbers plus other test cases for invalid data.
Using the Equivalence Partitioning method above test cases can be divided into three sets of input data called classes. Each test case is representative of a respective class.
So in the above example, we can divide our test cases into three equivalence classes of some valid and invalid inputs.
Test cases for an input box accepting numbers between 1 and 1000, using Equivalence Partitioning:
#1) One input data class with all valid inputs. Pick a single value from range 1 to 1000 as a valid test case. If you select other values between 1 and 1000 the result is going to be the same. So one test case for valid input data should be sufficient.
#2) Input data class with all values below the lower limit. I.e. any value below 1, as an invalid input data test case.
#3) Input data with any value greater than 1000 to represent the third invalid input class.
So using Equivalence Partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.
We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of the equivalence class can be exercised.
Equivalence Partitioning uses the fewest test cases to cover the maximum requirements.
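A sketch of the 1 to 1000 example as executable checks; the accepts function below is a stand-in for the application under test, not real application code:

def accepts(value):
    # Stand-in for the application: accepts integers from 1 to 1000.
    return isinstance(value, int) and 1 <= value <= 1000

# One representative value per equivalence class:
assert accepts(500) is True     # class 1: any valid value in 1..1000
assert accepts(-3) is False     # class 2: any value below 1
assert accepts(1500) is False   # class 3: any value above 1000

Any other value picked from the same class should produce the same result, which is exactly why one representative per class is enough.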
5.8.2 Boundary Value Analysis (BVA)?
It's widely recognized that input values at the extreme ends of the input domain cause more errors in the system.
More application errors occur at the boundaries of the input domain.
‘Boundary Value Analysis' Testing technique is used to identify errors at boundaries rather than finding those that exist in the center of the input domain.
Boundary Value Analysis is the next part of Equivalence Partitioning for designing test cases where test cases are selected at the edges of the equivalence classes.
Test cases for input box accepting numbers between 1 and 1000 using "Boundary value analysis":--
#1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and 1000 in our case.
#2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.
#3) Test data with values just above the extreme edges of the input domain i.e. values 2 and 1001.
Boundary Value Analysis is often considered part of stress and negative testing.
Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and previous judgments.
For example, if you have divided the 1 to 1000 input values into a valid data equivalence class, then you can select test case values like 1, 11, 100, 950, etc. The same applies to the test cases for the invalid data classes.
This should be a very basic and simple example to understand the Boundary Value Analysis and Equivalence Partitioning concept.
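The same 1 to 1000 example expressed with boundary values; again the accepts function is only a stand-in for the application under test:

def accepts(value):
    # Stand-in for the application: accepts integers from 1 to 1000.
    return 1 <= value <= 1000

# BVA picks values on, just below and just above each boundary.
boundary_cases = {
    0: False,     # just below the lower boundary
    1: True,      # lower boundary
    2: True,      # just above the lower boundary
    999: True,    # just below the upper boundary
    1000: True,   # upper boundary
    1001: False,  # just above the upper boundary
}
for value, expected in boundary_cases.items():
    assert accepts(value) is expected, f"boundary case {value} failed"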
BANK example:
Deposit interest slabs:
up to 100 rs -> 10%
100 - 500 rs -> 15%
above 500 rs -> 20%
Test data (boundaries and their neighbors): 99, 100, 101, 499, 500, 501, 502

Deposit: 1 - 100 rs = 10%
Test data: 0, 1, 100, 101

Case: age 18 - 60
ECP values: 18, 60 (valid); 17, 61 (invalid)
BVA values: 17, 18, 19, 35, 59, 60, 61
5.8.3 Error guessing?
Error Guessing is a software testing technique based on guessing the errors which can prevail in the code. It is an experience-based testing technique where the test analyst uses his/her experience to guess the problematic areas of the application. This technique necessarily requires skilled and experienced testers.
Error guessing is a testing technique that makes use of a tester's skill, intuition and experience in testing similar applications to identify defects that may not be easy to capture by the more formal techniques. It is usually done after more formal techniques are completed.
5.8.4 Negative Testing.
It is a method of testing an application or system which ensures that the application behaves according to the requirements and can handle unwanted input and user behavior. Invalid data is inserted and the output is compared against the given input.
Negative testing ensures that your application can gracefully handle invalid input or unexpected user behavior. For example, if a user tries to type a letter in a numeric field, the correct behavior in this case would be to display the "Incorrect data type, please enter a number" message.
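A small sketch of the numeric-field example; the message wording follows the text above, while the parsing logic itself is an assumption:

def read_numeric_field(raw_input):
    # Validate a numeric field; reject non-numbers with a user-facing message.
    try:
        return int(raw_input)
    except (TypeError, ValueError):
        raise ValueError("Incorrect data type, please enter a number")

# Negative test: typing a letter must produce the message, not a crash.
try:
    read_numeric_field("abc")
except ValueError as err:
    assert str(err) == "Incorrect data type, please enter a number"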
6. Test execution & Defect Management.
6.1 Test case Execution:
Test execution is the process of executing the code and comparing the expected and actual results. The following factors are to be considered for a test execution process: based on risk, select a subset of the test suite to be executed for this cycle; assign the test cases in each test suite to testers for execution. Possible test case statuses:
PASS
FAIL
BLOCKED
DEFERRED
6.2 Test Execution Cycles:
Test execution is typically repeated over several cycles; based on risk, a subset of the test suite is selected for execution in each cycle.
Cycle 1
Cycle 2
Cycle 3
6.3 Smoke/Sanity Testing:
Smoke testing means basic verification that the implementations done in a build are working fine.
Example: for a new mobile phone, check each of the basic functions:
Camera
Network
Calling
Music
Songs
Internet speed
etc.
Advantage:
This helps determine whether the build is so flawed as to make any further testing a waste of time and resources. The smoke tests qualify the build for further formal testing. The main aim of smoke testing is to detect major issues early. Smoke tests are designed to demonstrate system stability and conformance to requirements.
Smoke testing is ideally performed by the QA lead, who decides, based on the result, whether to pass the build to the team for further testing or reject it.
The test cases for smoke testing can be manual, automated, or sometimes a hybrid approach.
It decides whether to accept the version, i.e. whether all changes are reflected or not.
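As an illustration, a smoke suite can be a short list of basic checks run before deeper testing; the checks below are placeholders, not real application probes:

def login_page_loads():  return True   # placeholder for a real check
def home_page_loads():   return True
def search_responds():   return True

smoke_checks = [login_page_loads, home_page_loads, search_responds]

def run_smoke_suite():
    # Reject the build for further testing if any basic check fails.
    for check in smoke_checks:
        if not check():
            return "Build REJECTED: " + check.__name__ + " failed"
    return "Build ACCEPTED for further testing"

print(run_smoke_suite())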
Sanity testing:
Sanity testing means verifying that the newly added functionalities, bug fixes, etc. are working fine:
it covers the main functionality flow as a whole.
Sanity testing is a software testing technique performed by the test team, covering some basic tests; it is conducted whenever a new build is received for testing.
A sanity test is generally performed without test scripts or test cases, but manually. The application developers or the QA team perform this testing.
Definition:
Sanity testing is surface-level testing where a QA engineer verifies that all the menus, functions and commands available in the product and project are working fine and not showing any error page.
Example:
In a project there are five modules: login page, home page, user detail page, new user creation, and task creation.
Advantage :
Sanity testing is usually performed when any minor bug is fixed or when there is a small change in the functionality. It is a kind of software testing done by the testers to ensure that the functionality works as expected. Sanity testing is narrow and deep.
6.4 Retesting and regression Testing:
In retesting, the same defect is checked to make sure it is fixed, using the steps to reproduce mentioned in the defect.
In regression testing, after the defect logged by the tester is fixed by the developer, the related unchanged functionality is re-tested to make sure the fix has not broken anything else.
6.5 Test closure Activities:
Test Closure is a document that gives a summary of all the tests conducted during the software development life cycle; it also gives a detailed analysis of the bugs removed and errors found. In other words, Test Closure is a memo that is prepared prior to formally completing the testing process.
> Checking for completion of tests. Here the Test Manager ensures that every test work has actually been completed.
> Handing over test objects. The relevant work products must be passed on to the relevant people. ...
Learning Experience. ...
Archiving.
When should we stop testing?
Software testing can be stopped when the factors below are met: 100% requirements coverage is achieved and complied with; defects of all sorts are dealt with properly and resolved; at least 95% of all tests have passed.
In short: when all test cases are completed, or when test coverage is complete.
What is a Test Summary Report?
This section includes the summary of testing activity in general. Information detailed here includes
The number of test cases executed
The number of test cases passed
The number of test cases failed
Pass percentage
Fail percentage
Comments
Total defect counts:
Defect - Open
Defect - Closed
Defect - Reopen
Defect - Deferred
What is Test Report?
Report = Detail + Clear + Standard + Specific.
> Detail: you should provide a detailed description of the testing activity, showing which testing you have performed. Do not put abstract information into the report, because the reader will not understand what you mean.
> Clear: All information in the test report should be short and clearly understandable.
> Standard: The Test Report should follow a standard template, so it is easy for stakeholders to review and it ensures consistency between test reports across projects.
> Specific: Do not write an essay about the project activity. Describe and summarize the test results and focus on the main points.
For example, a complete Test Report should provide information such as:
Project information: XYZ
Test cycle: (System Test, Integration Test...etc.)
Which functions have already been tested (% of TCs executed, % of TCs passed or failed...)
Defect report (Defect description, Priority or status...)
7. Defect management
7.1 Defect.
A mistake in coding is called an Error.
An error found by a tester is called a Defect.
A defect accepted by the development team is called a Bug.
If the build does not meet the requirements, it is a Failure.
A defect is a non-conformance to a requirement. A failure is a defect that reaches the customer.
A bug is a fault in a program which causes it to behave unexpectedly. Bugs are usually found either during unit testing done by developers or during module testing done by testers.
A defect is found when the application does not conform to the requirement specification. A defect can also be found when the client or user is testing.
7.2 Defect reporting:
DEFECT REPORT is a document that identifies and describes a defect detected by a tester. The purpose of a defect report is to state the problem as clearly as possible so that developers can replicate the defect easily and fix it.
7.3 Defect logging:
Defect logging is the process of finding defects in the application under test or product, by testing or by recording feedback from customers, and making new versions of the product that fix the defects or address the client feedback.
7.4 Defect Life Cycle:
DEFECT LIFE CYCLE, also known as Bug Life Cycle, is the journey of a defect from its identification to its closure. The life cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the defect tracking tool being used. Typical states are listed below; a small transition sketch follows the list.
Open/New
Fixed
Closed
Reopened
Deferred
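As an illustration only, the sketch below encodes these states in Java with one possible set of allowed transitions; real workflows differ per organization and are configured in the defect-tracking tool.

    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    public class DefectLifeCycle {
        enum State { NEW, FIXED, CLOSED, REOPENED, DEFERRED }

        // One possible transition map; real workflows are configured in the tracking tool.
        static final Map<State, Set<State>> ALLOWED = Map.of(
                State.NEW,      EnumSet.of(State.FIXED, State.DEFERRED),
                State.FIXED,    EnumSet.of(State.CLOSED, State.REOPENED),
                State.REOPENED, EnumSet.of(State.FIXED),
                State.DEFERRED, EnumSet.of(State.NEW),
                State.CLOSED,   EnumSet.noneOf(State.class));

        static boolean canMove(State from, State to) {
            return ALLOWED.get(from).contains(to);
        }

        public static void main(String[] args) {
            System.out.println(canMove(State.FIXED, State.REOPENED)); // true: fix failed retesting
            System.out.println(canMove(State.CLOSED, State.FIXED));   // false: closed is terminal here
        }
    }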
7.5 Defect Tracking:
Defect Reporting and Tracking. A defect is a non-conformance of a deliverable to its specification. Defects can be found by testers, project team members, or users. The Test Plan leads testers through the product functionality and helps them reveal the most critical defects.
Defect Tools:
HP ALM (Quality Center)
JIRA
Bugzilla
What is JIRA?
Jira Software is an agile project management tool that supports any agile methodology, be it scrum, kanban, or your own unique flavor.
From agile dashboards to reports, you can plan, track, and manage all your agile software development projects from a single tool.
JIRA is a project management tool and uses issues to track all the tasks. An issue helps to track all works that underlie in a project. In real time, every work or task either technical, non-technical, support or any other type of a project in JIRA are logged as an issue.
An issue can be of different types, depending on the organization and requirements -
Story of a project
Task of a story
Sub-task of a story
A defect or bug can be an issue
Helpdesk Ticket can be logged as issue
Leave Request
8. Other Testing Types.
8.1 Database testing
It is a type of software testing that checks the schema, tables, triggers, etc. of the database under test. It also checks data integrity and consistency. It may involve creating complex queries to load/stress test the database and check its responsiveness.
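For example, a data-integrity check might look for orphaned child rows. The JDBC sketch below is an illustrative assumption: the connection URL, credentials and the orders/customers tables are hypothetical, and it presumes a suitable JDBC driver on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DbIntegrityCheck {
        public static void main(String[] args) throws Exception {
            // URL, credentials and table names are hypothetical placeholders.
            try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/testdb", "tester", "secret");
                 Statement st = con.createStatement();
                 // Orphaned rows: orders whose customer_id has no matching customer.
                 ResultSet rs = st.executeQuery(
                         "SELECT COUNT(*) FROM orders o "
                       + "LEFT JOIN customers c ON o.customer_id = c.id "
                       + "WHERE c.id IS NULL")) {
                rs.next();
                int orphans = rs.getInt(1);
                System.out.println(orphans == 0 ? "Integrity check passed"
                                                : orphans + " orphaned order rows found");
            }
        }
    }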
8.2 Accessibility Testing.
It is defined as a type of software testing performed to ensure that the application being tested is usable by people with disabilities such as hearing impairment, color blindness, old age, and other disadvantaged groups. It is a subset of Usability Testing.
Functional Testing: Functional testing is done based on the business requirements.
Non Functional Testing:
It is done based on customer expectations and performance requirements.
Non-functional testing solely focuses on the quality of the software, especially its non-functional aspects such as response time, security, scalability, usability and performance.
It is defined as a type of software testing that checks the non-functional aspects (performance, load, stress, usability, reliability, maintainability, portability, etc.) of a software application. It is designed to test the readiness of a system as per non-functional parameters which are never addressed by functional testing.
8.3 Usability Testing.
It is the practice of testing "how easy a design is to use on a group of representative users". It usually involves observing users as they attempt to complete tasks and can be done for different types of designs, from user interfaces to physical products.
8.4 UI Testing.
Also known as GUI testing, UI testing is the process of testing the visual elements of an application to validate whether they accurately meet the expected performance and functionality. By testing the GUI, testers can validate that UI functions are free from defects.
8.5 Localization Testing.
It is a software testing technique, that checks that the software behaves according to the local culture or settings. The purpose of doing localization testing is to check appropriate linguistic and cultural aspects for a particular locale.
8.6 Performance Testing.
Software performance testing is a type of testing performed to determine how a system performs, and to measure, validate or verify quality attributes of the system such as "responsiveness, speed, scalability and stability under a variety of load conditions."
8.7 Load testing.
It is a type of non-functional testing. A load test is a type of software testing "conducted to understand the behavior of the application under a specific expected load." "Load testing is performed to determine how a system behaves under both normal and peak conditions."
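Real load tests are usually built with dedicated tools such as JMeter or LoadRunner, but the idea can be sketched in plain Java: fire a number of concurrent requests at the application and record response codes and times. The URL, user count and request count below are hypothetical.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class MiniLoadTest {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example-app.local/home")).build(); // hypothetical URL
            ExecutorService pool = Executors.newFixedThreadPool(50);       // 50 concurrent "users"
            for (int i = 0; i < 500; i++) {                                // 500 requests in total
                pool.submit(() -> {
                    long start = System.nanoTime();
                    try {
                        HttpResponse<String> resp =
                                client.send(request, HttpResponse.BodyHandlers.ofString());
                        long ms = (System.nanoTime() - start) / 1_000_000;
                        System.out.println("Status " + resp.statusCode() + " in " + ms + " ms");
                    } catch (Exception e) {
                        System.out.println("Request failed: " + e.getMessage());
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
        }
    }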
8.8 Stress Testing.
It is a Non-Functional testing technique that is performed as part of performance testing. During stress testing, the system is monitored after subjecting the system to "overload" to ensure that the system can sustain the stress.
8.9 RELIABILITY TESTING is a software testing type that checks whether the software can perform a failure-free operation for a specified period of time in a particular environment. Reliability testing assures that the product is fault-free and is reliable for its intended purpose.
8.10 Maintainability testing is the process of testing the system's ability to be updated and modified if required. This is a very important part, as the system is subjected to changes throughout the software life cycle. Once the system is deployed to the production environment, the software requires maintenance.
8.11 Portability testing is the process of determining the degree of ease or difficulty to which a software component or application can be effectively and efficiently transferred from one hardware, software or other operational or usage environment to another.
8.12 Endurance Testing.
It is a non functional type of software testing. It is also known as Soak testing. It involves testing a system with a "significant load extended over a significant period of time," to discover how the system behaves under sustained use.
8.13 Volume testing.
It is a type of Software Testing, where the software is subjected to a "huge volume of data." It is also referred to as flood testing. Volume testing is done to analyze the system performance by increasing the volume of data in the database.
8.14 Installation Testing.
Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known as installation testing. These procedures may involve full or partial upgrades and install/uninstall processes.
8.15 Configuration Testing.
It is defined as a software testing type that checks an application with multiple combinations of software and hardware, to find the optimal configurations under which the system can work without any flaws or bugs.
8.16 Compatibility Testing.
It is a type of Software testing "to check whether your software is capable of running on different hardware, operating systems, applications, network environments or Mobile devices." Compatibility Testing is a type of Non-functional testing.
8.17 Security Testing:
It is a type of software testing that intends to uncover vulnerabilities, threats and risks in a software application, and to determine that its data and resources are protected from possible intruders, "preventing malicious attacks from intruders". It also helps in detecting all possible security risks in the system and helps developers fix these problems through coding.
9. Adhoc Testing:
It can be performed when there is "limited time to do elaborate testing". Usually adhoc testing is performed after the formal test execution, if time permits. Adhoc testing will be effective only if the tester is knowledgeable about the System Under Test.
10. Compliance testing: Also known as Conformance testing, it is a non-functional testing technique which is done to validate "whether the system developed meets the organization's prescribed standards or not."
Testing some feature (written as a Gherkin feature file):

Feature: Login functionality
  # Here we will test login by Admin, Maker, Checker, and Branch user.

  Background:
    Given user is on login page

  Scenario Outline: Login by customer
    When user enters username "<username>"
    And user enters password "<password>"
    And user clicks on Sign in button
    Then user should be logged in successfully

    Examples:
      | username | password  |
      | Testing  | abc@123   |
      | Testing1 | abc@1234  |
      | testing2 | abc@12345 |
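A scenario like this is typically glued to automation code through step definitions. Below is a minimal Cucumber-JVM step-definition sketch in Java with Selenium, written as an assumption of how the steps above could be implemented; the URL and element locators are hypothetical.

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginSteps {
        private final WebDriver driver = new ChromeDriver(); // Cucumber creates this class per scenario

        @Given("user is on login page")
        public void userIsOnLoginPage() {
            driver.get("https://example-app.local/login");            // hypothetical URL
        }

        @When("user enters username {string}")
        public void userEntersUsername(String username) {
            driver.findElement(By.id("username")).sendKeys(username); // hypothetical locator
        }

        @When("user enters password {string}")
        public void userEntersPassword(String password) {
            driver.findElement(By.id("password")).sendKeys(password);
        }

        @When("user clicks on Sign in button")
        public void userClicksSignIn() {
            driver.findElement(By.id("signInButton")).click();
        }

        @Then("user should be logged in successfully")
        public void userShouldBeLoggedIn() {
            if (!driver.getCurrentUrl().contains("/home")) {          // hypothetical landing page
                throw new AssertionError("Login failed");
            }
            driver.quit();
        }
    }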
--------------------------------------------------------------------------------------------------------------------------
Topics Covered: Risk and Risk Management.
What is Risk?
A risk is an uncertain future event or condition which, if it happens, affects the mission objective. Companies face many such events.
A risk could have a positive or a negative effect.
For example, it could be anything like the corona pandemic.
1. Opportunity :
Positive risks are called opportunities.
You would like to take maximum advantage of these positive risks.
2. Issue:
Risk is associated with a future event which has not happened yet.
A risk which has already occurred is considered an issue.
3. Risk Appetite:
The amount and type of risk that an organisation is prepared to seek, accept or tolerate.
4. Risk Tolerance:
An organisation's readiness to bear the risk after risk treatment, in order to achieve its objective.
Why take risk?
There is a balance between risk and reward.
Generally, more risk leads to more reward, but that is not always true.
What is Risk Management?
It is the identification, assessment and prioritization of risks (positive or negative), followed by coordinated and economical application of resources to minimize, monitor and control the probability and/or impact of unfortunate events, or to maximize the realization of opportunities.
Risk management principles. Risk management should:
- Create value
- Be an integral part of organisational processes
- Be part of the decision-making process
- Be systematic and structured
- Be transparent
- Be responsive to change
- Be capable of continual improvement and enhancement
- Be continually or periodically re-assessed
The Risk Management process can be divided into these 5 steps:
1. > Plan Risk Management:
@ Terms and definitions
@ Roles and responsibilities
@ Tools and templates
2. > Identify Risks - identify all potential risks and list them down.
Once the plan is complete, identify all the potential risks.
Risk identification is a systematic and methodical process.
It is best done in a group environment.
A wide range of people participate in this process, including management, employees, customers and other SMEs.
Tools used:
> Brainstorming is the most common approach.
> Flow diagrams
> SWOT diagram (Strengths, Weaknesses, Opportunities, Threats)
> Ishikawa diagram (Cause and Effect)
Risk Register:
The output of the Identify Risks process is a risk register.
It lists down all the identified risks.
In the next process, these risks are prioritized (e.g., flagged with a red colour) and an action plan is created to address them.
3. > Analyse Risks - using qualitative or quantitative techniques to identify the priority risks.
Risks are analysed to set priority.
This sets the focus on the high-priority risks.
@ Qualitative and quantitative techniques:
Qualitative risk analysis: quick and easy to perform / subjective
Quantitative risk analysis: detailed and time-consuming / analytic
Tools: Probability and Impact Matrix
- Expected monetary value analysis
- Monte Carlo analysis
- Decision tree
@ Probability and impact matrix:
> This is a qualitative risk analysis tool
> This evaluates
- Likelihood (probability) that a particular risk will occur.
- Potential impact on an objective if it occurs.
- Each risk is analysed for probability and impact and is assigned a rating.
> 9-point rating: a score between 1 and 9
> 5-point rating: very low, low, medium, high, very high (1 to 5)
> 3-point rating: low, medium, high (1 to 3)
Risk = Probability x Impact
e.g. Risk score = 1 x 9 = 9 (a risk with the lowest probability but the highest impact on a 9-point scale). A small scoring sketch over a risk register follows.
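A minimal sketch of this Risk = Probability x Impact scoring over a hypothetical risk register, using the 9-point scale mentioned above (the risks and ratings are made up for illustration):

    import java.util.Comparator;
    import java.util.List;

    public class RiskMatrix {
        record Risk(String name, int probability, int impact) {  // ratings on a 1-to-9 scale
            int score() { return probability * impact; }
        }

        public static void main(String[] args) {
            List<Risk> register = List.of(               // hypothetical risk register entries
                    new Risk("Key developer leaves", 3, 8),
                    new Risk("Test environment down", 6, 5),
                    new Risk("Late requirement change", 8, 4));
            register.stream()
                    .sorted(Comparator.comparingInt(Risk::score).reversed())
                    .forEach(r -> System.out.println(r.name() + " -> risk score " + r.score()));
        }
    }

Sorting by score puts the highest-priority risks at the top of the register.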
4. > Plan Risk Response -
How to decrease the possibility of Negative risk affecting the objective.
How to increase the possibility of Positive risk helping the objective.
Negative Risk > Avoid (eliminate the cause, or escalate to seniors), Mitigate (reduce its probability or impact), Transfer (to a third party where justified), Accept.
Acceptance is of 2 types:
@ Passive acceptance: no plan is created to deal with the risk.
@ Active acceptance: a contingency plan is created and the risks are monitored.
Positive Risk > Exploit (make the best use of it, e.g. put the best team members on it), Enhance (increase the probability and/or impact of the risk), Share (share the opportunity with a third party), Accept (accept the opportunity).
5. > Monitor and Control Risk -
Regularly review the identified risks and ensure that these are still relevant
- Identify new risks
- Remove risks that are not relevant
Risk audits may be conducted to ensure that the plan is being implemented and is effective
While monitoring risks:
Use workarounds to deal with unexpected risks to reduce the impact
Workarounds should be documented for future reference.
Risk and Testing:
Risk criticality in testing: we look closely at risks - the possible problems that might endanger the objectives of the project stakeholders.
How to determine the level of risk using likelihood and impact?
We'll see that there are risks related to the product and risks related to the project, and look at typical risks in both categories. Finally - and most important - we'll look at various ways that risk analysis and risk management can help us plot a course for solid testing.
> product risk,
> project risk,
1 Risks and levels of risk:
Risk is a word we all use loosely, but what exactly is risk? It's the possibility of a negative or undesirable outcome.
In the future, a risk has some likelihood between 0% and 100%; it is a possibility, not a certainty.
In the past, however, either the risk has materialized and become an outcome or issue or it has not;
the likelihood of a risk in the past is either 0% or 100%.
The likelihood of a risk becoming an outcome is one factor to consider when thinking about the level of risk associated with its possible negative consequences. The more likely the outcome is, the worse the risk. However, likelihood is not the only consideration.
For example, most people are likely to catch a cold in the course of their lives, usually more than
once. The typical healthy individual suffers no serious consequences. Therefore, the overall level of risk associated with colds is low for this person. But the risk of a cold for an elderly person with breathing difficulties would be high. The potential consequences or impact is an important
consideration affecting the level of risk, too.
Remember that in topic 1 we discussed how system context, and especially the risk associated with failures, influences testing. Here, we'll get into more detail about the concept of risks, how they influence testing, and specific ways to manage risk.
We can classify risks into project risks (factors relating to the way the work is carried out, i.e. the test project) and product risks (factors relating to what is produced by the work, i.e. the thing we are testing). We will look at product risks first.
2 Product risks:
You can think of a product risk as the possibility that the system or software might fail to satisfy
some reasonable customer, user, or stakeholder expectation.
(Some authors refer to 'product risks' as 'quality risks' as they are risks to the quality of the
product.)
Unsatisfactory software might:
- omit some key function that the customers specified, the users required or the stakeholders were promised;
- be unreliable and frequently fail to behave normally;
- fail in ways that cause financial or other damage to a user or the company that user works for;
- have problems related to a particular quality characteristic, which might not be functionality but rather security, reliability, usability, maintainability or performance.
Risk-based testing is the idea that we can organize our testing efforts in a way that reduces the
residual level of product risk when the system ships. Risk based testing uses risk to prioritize and
emphasize the appropriate tests during test execution, but it's about more than that.
Risk-based testing starts early in the project, identifying risks to system quality and using that
knowledge of risk to guide testing planning, specification, preparation and execution.
Risk-based testing involves both mitigation - testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects - and contingency - testing to identify workarounds to make the defects that do get past us less painful.
Risk-based testing also involves measuring how well we are doing at finding and removing defects in critical areas. It can also involve using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities, and to help us select which test activities to perform.
Risk-based testing starts with product risk analysis. One technique for risk
analysis is a close reading of the requirements specification, design specifications,
user documentation and other items.
Another technique is brainstorming with many of the project stakeholders. Another is a sequence of one-on-one or small-group sessions with the business and technology experts in the company.
Some people use all these techniques when they can. To us, a team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge,wisdom and insight of the entire team to determine what to test and how much.
While you could perform the risk analysis by asking, 'What should we worry about?' usually more structure is required to avoid missing things.
One way to provide that structure is to look for specific risks in particular product risk categories. We can find these kinds of risks in products in several ways:
You could consider risks in the areas of functionality, localization, usability, reliability,
performance and supportability. Alternatively, you could use the quality characteristics and sub-
characteristics from ISO 9126, as each sub-characteristic that matters is subject to risks that the
system might have troubles in that area.
You might have a checklist of typical or past risks that should be considered.
You might also want to review the tests that failed and the bugs that you found in a previous release or a similar product. These lists and reflections serve to jog the memory, forcing you
to think about risks of particular kinds, as well as helping you structure the documentation
of the product risks.
When we talk about specific risks, we mean a particular kind of defect or failure that might occur. For example, if you were testing the *Calculator* utility that is bundled with Microsoft Windows, you might identify 'incorrect calculation' as a specific risk within the category of functionality. However, this is too broad. Consider incorrect addition. This is a high-impact kind of defect, as everyone who uses the calculator will see it. It is unlikely, since addition is not a complex algorithm. Contrast that with an incorrect sine calculation. This is a low-impact kind of defect, since few people use the sine function on the Windows calculator. It is more likely to have a defect, though, since sine functions are hard to calculate.
After identifying the risk items, you and, if applicable, the stakeholders, should review the list to
assign the likelihood of problems and the impact of problems associated with each one. There are many ways to go about this assignment of likelihood and impact. You can do this with all the stakeholders at once. You can have the business people determine impact and the technical people determine likelihood, and then merge the determinations. Either way, the reason for identifying risks first and then assessing their level, is that the risks are relative to each other.
The scales used to rate likelihood and impact vary. Some people rate them high, medium and low. Some use a 1-10 scale. The problem with a 1-10 scale is that it's often difficult to tell a 2 from a 3 or a 7 from an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high,medium, low and very low) tends to work well.
Given two classifications of risk levels, likelihood and impact, we have a problem, though: we need a single, aggregate risk rating to guide our testing effort. As with rating scales, practices vary. One approach is to convert each risk classification into a number and then either add or multiply the numbers to calculate a risk priority number. For example, suppose a particular risk has a high likelihood and a medium impact. The risk priority number would then be 6 (2 x 3, converting the high likelihood to 2 and the medium impact to 3).
Armed with a risk priority number, we can now decide on the various risk mitigation options available to us. Do we use formal training for programmers or analysts, rely on cross-training and reviews or assume they know enough? Do we perform extensive testing, cursory testing or no testing at all? Should we ensure unit testing and system testing coverage of this risk? These options and more are available to us.
Let's finish this section with two quick tips about product risk analysis.
First,remember to consider both likelihood and impact. While it might make you feel like a hero to find lots of defects, testing is also about building confidence in key functions. We need to test the things that probably won't break but would be catastrophic if they did.
Second, risk analyses, especially early ones, are educated guesses. Make sure that you follow up and
revisit the risk analysis at key project milestones. For example, if you're following a V-model, you
might perform the initial analysis during the requirements phase, then review and revise it at the end of the design and implementation phases, as well as prior to starting unit test, integration test, and system test. We also recommend revisiting the risk analysis during testing. You might find you have discovered new risks or found that some risks weren't as risky as you thought and increased your confidence in the risk analysis.
3 Project risks:
We just discussed the use of testing to manage risks to product quality.
However, testing is an activity like the rest of the project and thus it is subject to risks that endanger the project. To deal with the project risks that apply to testing, we can use the same concepts we apply to identifying, prioritizing and managing product risks.
Remembering that a risk is the possibility of a negative outcome, what project risks affect testing? There are direct risks such as the late delivery of the test items to the test team or availability issues with the test environment. There are also indirect risks such as excessive delays in repairing defects found in testing or problems with getting professional system administration support for the test environment.
Of course, these are merely four examples of project risks; many others can
apply to your testing effort.
To discover these risks, ask yourself and other project participants and stakeholders,
'What could go wrong on the project to delay or invalidate the test plan, the test strategy and the
test estimate?
What are unacceptable outcomes of testing or in testing?
What are the likelihoods and impacts of each of these risks?' You can see that this process is very
much like the risk analysis process for products.
Checklists and examples can help you identify test project risks.
For any risk, product or project, you have four typical options:
• Mitigate: Take steps in advance to reduce the likelihood (and possibly the impact) of the risk.
• Contingency: Have a plan in place to reduce the impact should the risk become an outcome.
• Transfer: Convince some other member of the team or project stakeholder to reduce the likelihood or
accept the impact of the risk.
• Ignore: Do nothing about the risk, which is usually a smart option only when there's little that can be done or when the likelihood and impact are low.
There is another typical risk-management option, buying insurance, which is not usually pursued for project or product risks on software projects, though it is not unheard of.
Here are some typical risks along with some options for managing them.
• Logistics or product quality problems that block tests: These can be mitigated through careful
planning, good defect triage and management, and robust test design.
• Test items that won't install in the test environment: These can be mitigated through smoke (or
acceptance) testing prior to starting test phases or as part of a nightly build or continuous
integration. Having a defined uninstall process is a good contingency plan.
• Excessive change to the product that invalidates test results or requires updates to test cases,
expected results and environments: These can be mitigated through good change-control processes, robust test design and lightweight test documentation. When severe incidents occur, transference of the risk by escalation to management is often in order.
• Insufficient or unrealistic test environments that yield misleading results: One option is to transfer the risks to management by explaining the limits on test results obtained in limited
environments. Mitigation - sometimes complete alleviation - can be achieved by outsourcing tests such as performance tests that are particularly sensitive to proper test environments.
Here are some additional risks to consider and perhaps to manage:
• Organizational issues such as shortages of people, skills or training, problems with communicating and responding to test results, bad expectations of what testing can achieve and complexity of the project team or organization.
• Supplier issues such as problems with underlying platforms or hardware, failure to consider testing issues in the contract or failure to properly respond to the issues when they arise.
• Technical problems related to ambiguous, conflicting or unprioritized requirements, an excessively large number of requirements given other project constraints, high system complexity and quality problems with the design, the code or the tests.
Finally, don't forget that test items can also have risks associated with them.
For example, there is a risk that the test plan will omit tests for a functional area
or that the test cases do not exercise the critical areas of the system.
Tying it all together for risk management:
We can deal with test-related risks to the project and product by applying some
straightforward, structured risk management techniques. The first step is to
assess or analyze risks early in the project. Like a big ocean liner, projects, especially
large projects, require steering well before the iceberg is in plain sight.
By using a test plan template like the IEEE 829 template shown earlier, you can remind yourself to consider and manage risks during the planning phase.
It's worth repeating here that early risk analyses are educated guesses.Some of those guesses will be wrong. Make sure that you plan to re-assess and adjust your risks at regular intervals in the project and make appropriate course corrections to the testing or the project itself.
One common problem people have when organizations first adopt risk based testing is a tendency to be excessively alarmed by some of the risks once they are clearly articulated. Do not confuse impact with likelihood or vice versa.
You should manage risks appropriately, based on likelihood and impact. Triage the risks by
understanding how much of your overall effort can be spent dealing with them. It's very important to maintain a sense of perspective, a focus on the point of the exercise. As with life, the goal of risk- based testing should not be - cannot practically be - a risk-free project. What we can accomplish with risk-based testing is the marriage of testing with best practices in risk management to achieve a project outcome that balances risks with quality, features, budget and schedule.
-----------------------------------------------------------------------------------------------------------------------------
Test management > estimation techniques:
Learning objective: differentiate between the two conceptually different estimation approaches - the metrics-based approach and the expert-based approach. (K2)
What is Estimation?
Estimating what testing will involve and what it will cost: the testing work to be done can often be seen as a subproject within the larger project. So, we can adapt fundamental techniques of estimation for testing. We could start with a work-breakdown structure that identifies the stages, activities and tasks.
1 Estimation techniques:
There are two techniques for estimation covered in this Syllabus.
One involves *Consulting the people who will do the work and other people with expertise on the tasks to be done.
The other involves analyzing *Metrics from past projects and from industry data.
Asking the individual contributors and experts involves working with experienced
staff members to develop a work-breakdown structure for the project. With that done, you work together to understand, for each task, the effort, duration, dependencies, and resource requirements. The idea is to draw on the collective wisdom of the team to create your test estimate.
Using a tool such as Microsoft Project or a whiteboard and sticky-notes, you and the team can then
predict the testing end-date and major milestones. This technique is often called 'bottom up' estimation because you start at the lowest level of the hierarchical breakdown in the work-breakdown structure - the task - and let the duration, effort, dependencies and resources for each task add up across all the
tasks.
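A minimal sketch of such a bottom-up estimate: sum the effort of every task in the work-breakdown structure. The tasks and person-day figures below are hypothetical.

    import java.util.Map;

    public class BottomUpEstimate {
        public static void main(String[] args) {
            // Hypothetical work-breakdown structure: task -> effort in person-days.
            Map<String, Double> wbs = Map.of(
                    "Review requirements", 3.0,
                    "Design test cases", 10.0,
                    "Prepare test data", 4.0,
                    "Execute test cycle 1", 12.0,
                    "Execute test cycle 2", 8.0,
                    "Defect retesting and reporting", 5.0);
            double total = wbs.values().stream().mapToDouble(Double::doubleValue).sum();
            System.out.println("Bottom-up test estimate: " + total + " person-days");
        }
    }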
Analyzing metrics can be as simple or sophisticated as you make it. The simplest approach is to ask, 'How many testers do we typically have per developer on a project?' A somewhat more reliable approach involves classifying the project in terms of size (small, medium or large) and complexity (simple, moderate or complex) and then seeing on average how long projects of a particular size and complexity combination have taken in the past.
Another simple and reliable approach we have used is to look at the *Average effort per test case in
similar past projects and to use the estimated number of test cases to estimate the total effort.
Sophisticated approaches involve building mathematical models in a spreadsheet that look at historical
or industry averages for certain key parameters - number of tests run by tester per day, number of
defects found by tester per day, etc. - and then plugging in those parameters to * Predict
duration and effort for key tasks or activities on your project.
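A minimal sketch of the metrics-based approach, plugging hypothetical historical averages into simple formulas to predict design effort and execution duration:

    public class MetricsEstimate {
        public static void main(String[] args) {
            // Hypothetical historical averages and project parameters.
            int plannedTestCases = 400;
            double avgDesignEffortPerTestCaseHrs = 1.5;  // from similar past projects
            double testsRunPerTesterPerDay = 12.0;       // historical execution rate
            int testers = 4;

            double designEffortHrs = plannedTestCases * avgDesignEffortPerTestCaseHrs;
            double executionDays = plannedTestCases / (testsRunPerTesterPerDay * testers);

            System.out.printf("Test design effort: %.0f hours%n", designEffortHrs);
            System.out.printf("Execution duration: %.1f working days%n", executionDays);
        }
    }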
The tester-to-developer ratio is an example of a top-down estimation technique, in that the entire estimate is derived at the project level, while the parametric technique is bottom-up, at least when it is used to estimate individual tasks or activities.
We prefer to start by drawing on the team's wisdom to create the work breakdown structure and a detailed bottom-up estimate. We then apply models and rules of thumb to check and adjust the estimate bottom-up and top-down using past history. This approach tends to create an estimate that is both more accurate and more defensible than either technique by itself.
Even the best estimate must be negotiated with management. Negotiating sessions exhibit amazing
variety, depending on the people involved. However, there are some classic negotiating positions. It's
not unusual for the test leader or manager to try to sell the management team on the value added by the
testing or to alert management to the potential problems that would result from not testing enough.
It's not unusual for management to look for smart ways to accelerate the schedule or to press for
equivalent coverage in less time or with fewer resources. In between these positions, you and your
colleagues can reach compromise, if the parties are willing. Our experience has been that successful
negotiations about estimates are those where the focus is less on winning and losing and more about
figuring out how best to balance competing pressures in the realms of quality, schedule, budget and
features.
2 Factors affecting test effort:
Testing is a complex endeavor on many projects and a variety of factors can influence it. When creating test plans and estimating the testing effort and schedule, you must keep these factors in mind or your
plans and estimates will deceive you at the beginning of the project and betray you at the middle or end.
The test strategies or approaches you pick will have a major influence on the testing effort. This factor is so influential that we'll come back to it later; first, let's look at factors related to the product, the process and the results of testing.
a) Product factors start with the presence of sufficient project documentation so that the testers can
figure out what the system is, how it is supposed to work and what correct behavior looks like. In
other words, adequate and high-quality information about the test basis will help us do a better, more
efficient job of defining the tests.
b) The importance of non-functional quality characteristics such as usability, reliability, security,
performance, and so forth also influences the testing effort. These test targets can be expensive and
time consuming.
Complexity is another major product factor. Examples of complexity considerations include:
• The difficulty of comprehending and correctly handling the problem the system is being built to solve
(e.g., avionics and oil exploration software);
• The use of innovative technologies, especially those long on hyperbole and short on proven track
records;
• The need for intricate and perhaps multiple test configurations, especially when these rely on the
timely arrival of scarce software, hardware and other supplies;
• The prevalence of stringent security rules, strictly regimented processes or other regulations;
• The geographical distribution of the team, especially if the team crosses time-zones (as many
outsourcing efforts do).
While good project documentation is a positive factor, it's also true that having to produce detailed
documentation, such as meticulously specified test cases, results in delays. During test execution,
having to maintain such detailed documentation requires lots of effort, as does working with fragile
test data that must be maintained or restored frequently during testing.
Finally, increasing the size of the product leads to increases in the size of the project and the project team. Increases in the project and project team increase the difficulty of predicting and managing them. This leads to the disproportionate rate of collapse of large projects.
Process factors include the availability of test tools, especially those that reduce the effort
associated with test execution, which is on the critical path for release. On the development side,
debugging tools and a dedicated debugging environment (as opposed to debugging in the test environment) also reduce the time required to complete testing.
The life cycle itself is an influential process factor, as the V-model tends to be more fragile in the
face of late change while incremental models tend to have high regression testing costs. Process
maturity, including test process maturity, is another factor, especially the implication that mature
processes involve carefully managing change in the middle and end of the project, which reduces test
execution cost.
Time pressure is another factor to be considered. Pressure should not be an excuse to take unwarranted
risks. However, it is a reason to make careful, considered decisions and to plan and re-plan intelligently throughout the process, which is another hallmark of mature processes.
People execute the process, and people factors are as important or more important than any other.
Indeed, even when many troubling things are true about a project, an excellent team can often make good things happen on the project and in testing. Important people factors include the skills of the
individuals and the team as a whole, and the alignment of those skills with the project's
needs.
Since a project team is a team, solid relationships, reliable execution of agreed-upon commitments and
responsibilities and a determination to work together towards a common goal are important. This is
especially important for testing, where so much of what we test, use, and produce either comes from,
relies upon or goes to people outside the testing group. Because of the importance of trusting relationships and the lengthy learning curves involved in software and system engineering, the stability of the project team is an important people factor, too.
The test results themselves are important in the total amount of test effort during test execution. The
delivery of good-quality software at the start of test execution and quick, solid defect fixes during
test execution prevents delays in the test execution process. A defect, once identified, should not
have to go through multiple cycles of fix/retest/re-open, at least not if the initial estimate is
going to be held to.
You probably noticed from this list that we included a number of factors outside the scope and control
of the test leader or manager. Indeed, events that occur before or after testing can bring these
factors about. For this reason, it's important that testers, especially test leaders or managers, be
attuned to the overall context in which they operate. Some of these contextual factors result in
specific project risks for testing, which should be addressed in the test plan.
3 Test approaches or strategies:
The choice of test approaches or strategies is one powerful factor in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders. Of course, having choices also means that you can make mistakes, so we'll talk about how to pick the right test strategies in a minute. First, though, let's survey the major types of test strategies that are commonly found.
• Analytical: For example, the risk-based strategy involves performing a *Risk analysis using project
documents and stakeholder input, then planning, estimating, designing, and prioritizing the tests based
on risk.
Another analytical test strategy is the *Requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.
• Model-based: For example, you can build *mathematical models for loading and response for e-commerce servers, and test based on that model. If the behavior of the system under test conforms to that predicted by the model, the system is deemed to be working.
Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.
• Methodical: For example, you might have a *Checklist that you have put together over the years that suggests the major areas of testing to run or you might follow an industry-standard for software quality, such as ISO 9126, for your outline of major test areas. You then methodically design, implement and execute tests following this outline. Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed inhouse and gathered from outside, or adapted significantly from outside ideas and may have an early or late point of involvement for testing.
• Process- or Standard-compliant: For example, you might adopt the IEEE 829 standard for your testing, using published books to fill in the methodological gaps. Alternatively, you might adopt one of the agile methodologies such as Extreme Programming. Process- or standard-compliant strategies have in common reliance upon an externally developed approach to testing, often with little - if any - customization, and may have an early or late point of involvement for testing.
• Dynamic: For example, you might create a lightweight set of testing guidelines that focus on rapid adaptation or known weaknesses in software. Dynamic strategies, such as *Exploratory testing, have in common concentrating on finding as many defects as possible during test execution and adapting to the realities of the system under test as it is when delivered, and they typically emphasize the later stages of testing. See, for example, attack-based and exploratory approaches.
• Consultative or directed: For example, you might ask the users or *Developers of the system to tell you what to test or even rely on them to do the testing. *Consultative or directed strategies have in common the reliance on a group of non-testers to guide or perform the testing effort and typically
emphasize the later stages of testing simply due to the lack of recognition of the value of early testing.
• Regression-averse: For example, you might try to automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken. Regression-averse strategies have in common a set of procedures - usually automated - that allow them to detect
regression defects. A *Regression-averse strategy may involve automating functional tests prior to
release of the function, in which case it requires early testing, but sometimes the testing is almost entirely focused on testing functions that already have been released, which is in some sense a form of postrelease test involvement.
4. Two types of test strategies:
Analytical test strategies involve upfront analysis of the test basis, and tend to identify problems in
the test basis prior to test execution. This allows the early - and cheap - removal of defects. That is
a strength of preventive approaches.
Dynamic test strategies focus on the test execution period. Such strategies allow the location of defects and defect clusters that might have been hard to anticipate until you have the actual system in front of you. That is a strength of reactive approaches.
• Risks: Testing is about risk management, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense. For a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy.
• Skills: Strategies must not only be chosen, they must also be executed. So,you have to consider which skills your testers possess and lack. A *Standard compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach.
• Objectives: Testing must satisfy the needs of stakeholders to be successful. If the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested - for example, at a typical independent test lab - then a *Dynamic strategy makes sense.
• Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this case, you may need to devise a *Methodical test strategy that satisfies these regulators that you have met all their requirements.
• Product: Some products such as weapons systems and contract-development software tend to have well-specified requirements. This leads to synergy with a *Requirements-based analytical strategy.
• Business: Business considerations and business continuity are often important. If you can use a legacy system as a model for a new system, you can use a *Model-based strategy.
---------------------------------------------------------------------------------------------------------------------------
What is Security testing:
SECURITY TESTING is a type of software testing that intends to uncover vulnerabilities of the system and determine that its data and resources are protected from possible intruders.
Types of security testing:
Static code analysis
Penetration testing
Compliance testing
Load testing
Origin analysis testing
This scanning can be performed both manually and with automated tools. Penetration testing: this kind of testing simulates an attack from a malicious hacker. It involves analysis of a particular system to check for potential vulnerabilities to an external hacking attempt.
What are security testing tools?
Security testing tools support this type of testing: they help uncover vulnerabilities of the system and determine that the data and resources of the system are protected from possible intruders. They help ensure that the software system and application are free from any threats or risks that can cause a loss.
Why is security testing important?
Importance of Security Testing. Security testing is among the most important testing for an application; it checks whether confidential data stays confidential. In this type of testing, the tester plays the role of an attacker and plays around with the system to find security-related bugs.
Which is the best tool for security testing?
Best Security Penetration Testing Tools In The Market
#1) Netsparker.
#2) Acunetix.
#3) Core Impact.
#4) Intruder.
#5) Indusface WAS Free Website Security Check.
#6) Metasploit.
#7) Wireshark.
#8) w3af.
#9) Burp Suite
Burp Suite is an integrated platform for performing security testing of web applications. It is designed to be used by hands-on testers to support the testing process. With a little bit of effort, anyone can start using the core features of Burp to test the security of their applications.
Burp Suite is a set of tools used for penetration testing of web applications. It is developed by the company named Portswigger, which is also the alias of its founder Dafydd Stuttard.
How do you use a Burp Suite proxy?
Getting started with Burp Proxy
First, ensure that Burp is installed and running, and that you have configured your browser to work with Burp.
In Burp, go to the Proxy > Intercept tab, and ensure that interception is on (if the button says "Intercept is off", then click it to toggle the interception status).
What is Burp Suite?
As noted above, Burp or Burp Suite is a set of tools used for penetration testing of web applications, developed by PortSwigger. BurpSuite aims to be an all-in-one set of tools, and its capabilities can be enhanced by installing add-ons that are called BApps.
It is the most popular tool among professional web app security researchers and bug bounty hunters. Its ease of use makes it a more suitable choice over free alternatives like OWASP ZAP. Burp Suite is available as a free community edition, a professional edition that costs $399/year, and an enterprise edition that costs $3999/year. This article gives a brief introduction to the tools offered by BurpSuite. If you are a complete beginner in Web Application Pentest/Web App Hacking/Bug Bounty, we recommend you just read through without thinking too much about any one term.
The tools offered by BurpSuite are:
1. Spider:
It is a web spider/crawler that is used to map the target web application. The objective of the mapping is to get a list of endpoints so that their functionality can be observed and potential vulnerabilities can be found. Spidering is done for a simple reason that the more endpoints you gather during your recon process, the more attack surfaces you possess during your actual testing.
2. Proxy:
BurpSuite contains an intercepting proxy that lets the user see and modify the contents of requests and responses while they are in transit. It also lets the user send the request/response under monitoring to another relevant tool in BurpSuite, removing the burden of copy-paste. The proxy server can be adjusted to run on a specific loopback IP and port. The proxy can also be configured to filter out specific types of request-response pairs.
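Besides configuring a browser, you can point any HTTP client at the proxy listener (Burp's default is 127.0.0.1:8080). The Java sketch below routes a request through that listener; the target URL is hypothetical, and intercepting HTTPS additionally requires trusting Burp's CA certificate.

    import java.net.InetSocketAddress;
    import java.net.ProxySelector;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ThroughBurp {
        public static void main(String[] args) throws Exception {
            // Burp Proxy listens on 127.0.0.1:8080 by default; adjust if you changed the listener.
            HttpClient client = HttpClient.newBuilder()
                    .proxy(ProxySelector.of(new InetSocketAddress("127.0.0.1", 8080)))
                    .build();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://example-app.local/login")).build(); // hypothetical target
            HttpResponse<String> resp = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Status via proxy: " + resp.statusCode());
        }
    }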
3. Intruder:
It is a fuzzer. This is used to run a set of values through an input point. The values are run and the output is observed for success/failure and content length. Usually, an anomaly results in a change in response code or content length of the response. BurpSuite allows brute-force, dictionary file and single values for its payload position. The intruder is used for:
Brute-force attacks on password forms, pin forms, and other such forms.
The dictionary attack on password forms, fields that are suspected of being vulnerable to XSS or SQL injection.
Testing and attacking rate limiting on the web-app.
4. Repeater:
Repeater lets a user send requests repeatedly with manual modifications. It is used for:
Verifying whether the user-supplied values are being validated.
If user-supplied values are being validated, how well is it being done?
What values is the server expecting in an input parameter/request header?
How does the server handle unexpected values?
Is input sanitation being applied by the server?
How well does the server sanitize the user-supplied inputs?
What is the sanitation style being used by the server?
Among all the cookies present, which one is the actual session cookie?
How is CSRF protection being implemented and if there is a way to bypass it?
5. Sequencer:
The sequencer is an entropy checker that checks the randomness of tokens generated by the web server. These tokens are generally used for authentication in sensitive operations: cookies and anti-CSRF tokens are examples of such tokens. Ideally, these tokens must be generated in a fully random manner, so that the probability of appearance of each possible character at a position is distributed uniformly. This should be achieved both bit-wise and character-wise. An entropy analyzer tests this hypothesis. It works like this: initially, it is assumed that the tokens are random. Then the tokens are tested on certain parameters for certain characteristics. The significance level is defined as the minimum probability that the token must exhibit for a characteristic; if the token's probability for a characteristic falls below the significance level, the hypothesis that the token is random is rejected. This tool can be used to find weak tokens and enumerate their construction.
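Burp's actual analysis runs a battery of statistical tests, but the core idea can be illustrated with a simplified character-level Shannon entropy calculation (this sketch is an illustration, not Burp's algorithm):

    import java.util.HashMap;
    import java.util.Map;

    public class TokenEntropy {
        // Character-level Shannon entropy in bits per character.
        static double entropy(String token) {
            Map<Character, Integer> counts = new HashMap<>();
            for (char c : token.toCharArray()) counts.merge(c, 1, Integer::sum);
            double h = 0.0;
            for (int n : counts.values()) {
                double p = (double) n / token.length();
                h -= p * (Math.log(p) / Math.log(2));
            }
            return h;
        }

        public static void main(String[] args) {
            System.out.printf("repetitive token: %.2f bits/char%n", entropy("AAAA1111AAAA"));
            System.out.printf("random-looking:   %.2f bits/char%n", entropy("x7Qp9ZkR2mVb"));
        }
    }

A token built from a small, repetitive character set scores far fewer bits per character than a random-looking one.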
6. Decoder:
Decoder lists the common encoding methods like URL, HTML, Base64, Hex, etc. This tool comes in handy when looking for chunks of data in the values of parameters or headers. It is also used for payload construction for various vulnerability classes, and to uncover primary cases of IDOR and session hijacking.
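For example, decoding a parameter value that was Base64-encoded and then URL-encoded can be done in plain Java; the encoded value below is a made-up example:

    import java.net.URLDecoder;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class DecodeChain {
        public static void main(String[] args) {
            // A Base64-encoded value that was then URL-encoded (made-up example).
            String raw = "c2Vzc2lvbj1hZG1pbg%3D%3D";
            String urlDecoded = URLDecoder.decode(raw, StandardCharsets.UTF_8);
            String plain = new String(Base64.getDecoder().decode(urlDecoded),
                    StandardCharsets.UTF_8);
            System.out.println(plain); // prints: session=admin
        }
    }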
7. Extender:
BurpSuite supports external components, called BApps, that can be integrated into the tool suite to enhance its capabilities. These work just like browser extensions. They can be viewed, modified, installed, and uninstalled in the Extender window. Some of them are supported on the community version, but some require the paid professional version.
8. Scanner:
The scanner is not available in the community edition. It scans the website automatically for many common vulnerabilities and lists them with information on confidence over each finding and their complexity of exploitation. It is updated regularly to include new and less known vulnerabilities.
Thanks for reading my blog related to Software Testing.
Contact me on email id - hrtestingdrive@gmail.com