User talk:Gamankamanchi

What is test strategy in software testing?-- The choice of test approaches, or test strategy, is one of the most powerful factors in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders. Let’s survey the major types of test strategies that are commonly found:

•	Analytical: Let us take an example to understand this. The risk-based strategy involves performing a risk analysis using project documents and stakeholder input, then planning, estimating, designing, and prioritizing the tests based on risk. Another analytical test strategy is the requirements-based strategy, where an analysis of the requirements specification forms the basis for planning, estimating and designing tests. Analytical test strategies have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project.

•	Model-based: Let us take an example to understand this. You can build mathematical models for loading and response for e-commerce servers, and test based on that model. If the behavior of the system under test conforms to that predicted by the model, the system is deemed to be working. Model-based test strategies have in common the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project.

•	Methodical: Let us take an example to understand this. You might have a checklist that you have put together over the years that suggests the major areas of testing to run, or you might follow an industry standard for software quality, such as ISO 9126, for your outline of major test areas. You then methodically design, implement and execute tests following this outline. Methodical test strategies have in common the adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and may have an early or late point of involvement for testing.

•	Process- or standard-compliant: Let us take an example to understand this. You might adopt the IEEE 829 standard for your testing, using books such as [Craig, 2002] or [Drabick, 2004] to fill in the methodological gaps. Alternatively, you might adopt one of the agile methodologies such as Extreme Programming. Process- or standard-compliant strategies have in common reliance upon an externally developed approach to testing, often with little – if any – customization, and may have an early or late point of involvement for testing.

•	Dynamic: Let us take an example to understand this. You might create a lightweight set of testing guidelines that focus on rapid adaptation or known weaknesses in software. Dynamic strategies, such as exploratory testing, have in common concentrating on finding as many defects as possible during test execution and adapting to the realities of the system under test as it is when delivered, and they typically emphasize the later stages of testing. See, for example, the attack-based approach of [Whittaker, 2002] and [Whittaker, 2003] and the exploratory approach of [Kaner et al., 2002].

•	Consultative or directed: Let us take an example to understand this. You might ask the users or developers of the system to tell you what to test or even rely on them to do the testing. Consultative or directed strategies have in common the reliance on a group of non-testers to guide or perform the testing effort and typically emphasize the later stages of testing simply due to the lack of recognition of the value of early testing.

•	Regression-averse: Let us take an example to understand this. You might try to automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken. Regression-averse strategies have in common a set of procedures – usually automated – that allow them to detect regression defects. A regression-averse strategy may involve automating functional tests prior to release of the function, in which case it requires early testing, but sometimes the testing is almost entirely focused on testing functions that already have been released, which is in some sense a form of post-release test involvement.

Some of these strategies are more preventive, others more reactive. For example, analytical test strategies involve upfront analysis of the test basis, and tend to identify problems in the test basis prior to test execution. This allows the early – and cheap – removal of defects. That is a strength of preventive approaches. Dynamic test strategies focus on the test execution period. Such strategies allow the location of defects and defect clusters that might have been hard to anticipate until you have the actual system in front of you. That is a strength of reactive approaches.

Rather than see the choice of strategies, particularly the preventive or reactive strategies, as an either/or situation, we’ll let you in on the worst-kept secret of testing (and many other disciplines): there is no one best way. We suggest that you adopt whatever test approaches make the most sense in your particular situation, and feel free to borrow and blend. How do you know which strategies to pick or blend for the best chance of success? There are many factors to consider, but let us highlight a few of the most important:

•	Risks: Risk management is very important during testing, so consider the risks and the level of risk. For a well-established application that is evolving slowly, regression is an important risk, so regression-averse strategies make sense. For a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy.

•	Skills: Consider which skills your testers possess and which they lack, because strategies must not only be chosen, they must also be executed. A standards-compliant strategy is a smart choice when your team lacks the time and skills to create its own approach.

•	Objectives: Testing must satisfy the needs and requirements of stakeholders to be successful. If the objective is to find as many defects as possible with a minimal amount of up-front time and effort invested – for example, at a typical independent test lab – then a dynamic strategy makes sense.

•	Regulations: Sometimes you must satisfy not only stakeholders, but also regulators. In this case, you may need to plan a methodical test strategy that demonstrates to these regulators that you have met all their requirements.

•	Product: Some products, such as weapons systems and contract-developed software, tend to have well-specified requirements. This leads to synergy with a requirements-based analytical strategy.

•	Business: Business considerations and business continuity are often important. If you can use a legacy system as a model for a new system, you can use a model-based strategy.

You must choose testing strategies with an eye towards the factors mentioned earlier, the schedule, budget, and feature constraints of the project, and the realities of the organization and its politics. We mentioned above that a good team can sometimes triumph over a situation where materials, process and delaying factors are ranged against its success. However, talented execution of an unwise strategy is the equivalent of going very fast down a highway in the wrong direction. Therefore, you must make smart choices in terms of testing strategies.

Not Enough Time to Test

Use risk analysis to determine where testing should be focused. Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgment skills, common sense, and experience. (If warranted, formal methods are also available.) Considerations can include:

•	Which functionality is most important to the project’s intended purpose?
•	Which functionality is most visible to the user?
•	Which functionality has the largest safety impact?
•	Which functionality has the largest financial impact on users?
•	Which aspects of the application are most important to the customer?
•	Which aspects of the application can be tested early in the development cycle?
•	Which parts of the code are most complex, and thus most subject to errors?
•	Which parts of the application were developed in rush or panic mode?
•	Which aspects of similar/related previous projects caused problems?
•	Which aspects of similar/related previous projects had large maintenance expenses?
•	Which parts of the requirements and design are unclear or poorly thought out?
•	What do the developers think are the highest-risk aspects of the application?
•	What kinds of problems would cause the worst publicity?
•	What kinds of problems would cause the most customer service complaints?
•	What kinds of tests could easily cover multiple functionalities?
•	Which tests will have the best high-risk-coverage to time-required ratio?
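One common informal way to apply the risk analysis above is to score each feature for likelihood of failure and impact of failure, then test in descending order of the product of the two. The sketch below illustrates the idea; the feature names and scores are hypothetical, not from any real project.

```python
# Hypothetical risk-based prioritization: score = likelihood x impact,
# each rated 1 (low) to 5 (high). All entries below are illustrative.
features = [
    ("checkout payment", 4, 5),       # (name, likelihood, impact)
    ("profile avatar upload", 3, 1),
    ("order history export", 2, 2),
    ("login", 3, 5),
]

def risk_score(item):
    _, likelihood, impact = item
    return likelihood * impact

# Test the highest-risk features first.
prioritized = sorted(features, key=risk_score, reverse=True)
for name, likelihood, impact in prioritized:
    print(f"{name}: risk = {likelihood * impact}")
```

With limited time, you work down this list as far as the schedule allows, so whatever is cut is always the lowest-risk work.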

TEST STRATEGY vs TEST PLAN

A Test Strategy document is a high-level document, normally developed by the project manager. This document defines the “Software Testing Approach” used to achieve testing objectives. The Test Strategy is normally derived from the Business Requirement Specification document. The Test Strategy document is a static document, meaning that it is not updated often. It sets the standards for testing processes and activities, and other documents such as the Test Plan draw their contents from the standards set in the Test Strategy document.

Components of the Test Strategy document:
•	Scope and objectives
•	Business issues
•	Roles and responsibilities
•	Communication and status reporting
•	Test deliverables
•	Industry standards to follow
•	Test automation and tools
•	Testing measurements and metrics
•	Risks and mitigation
•	Defect reporting and tracking
•	Change and configuration management
•	Training plan

The Test Plan document, on the other hand, is derived from the Product Description, Software Requirement Specification (SRS), or Use Case documents. The Test Plan document is usually prepared by the Test Lead or Test Manager, and its focus is to describe what to test, how to test, when to test, and who will do which test. It is not uncommon to have one Master Test Plan as a common document for all test phases, with each test phase having its own Test Plan document. There is much debate as to whether the Test Plan document should also be a static document like the Test Strategy document mentioned above, or whether it should be updated regularly to reflect changes in the direction of the project and its activities. When a testing phase starts and the Test Manager is “controlling” the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process.
Components of the Test Plan document:
•	Test Plan ID
•	Introduction
•	Test items
•	Features to be tested
•	Features not to be tested
•	Test techniques
•	Testing tasks
•	Suspension criteria
•	Feature pass or fail criteria
•	Test environment (entry criteria, exit criteria)
•	Test deliverables
•	Staff and training needs
•	Responsibilities
•	Schedule

Types of Software Testing

Various types of software testing are performed to achieve different objectives when testing a software application. You can also read about different Software Testing Techniques which can be associated with various types of software testing.

Ad-hoc testing This type of software testing is very informal and unstructured and can be performed by any stakeholder without reference to any test case or test design document. The person performing ad-hoc testing should have a good understanding of the domain and workflows of the application in order to find defects and break the software. Ad-hoc testing is intended to find defects that were not found by existing test cases.

Acceptance Testing Acceptance testing is a formal type of software testing performed by end users once the features have been delivered by developers. The aim of this testing is to check whether the software conforms to their business needs and to the requirements provided earlier. Acceptance tests are normally documented at the beginning of the sprint (in agile) and are a means for testers and developers to work towards a common understanding and shared business domain knowledge.

Accessibility Testing In accessibility testing, the aim is to determine whether the contents of the website can be easily accessed by people with disabilities. Various checks are performed, such as color and contrast (for color-blind users), font size (for the visually impaired), and clear, concise text that is easy to read and understand.

Agile Testing Agile testing is a type of software testing that accommodates the agile software development approach and practices. In an agile development environment, testing is an integral part of software development and is done along with coding. Agile testing allows incremental and iterative coding and testing.

API Testing API testing is a type of testing similar to unit testing. Each of the software's APIs is tested as per the API specification. API testing is mostly done by the testing team, unless the APIs to be tested are complex and need extensive coding. API testing requires understanding the API's functionality and possessing good coding skills.

Automated testing This is a testing approach that uses testing tools and/or programming to run the test cases, using software or custom-developed test utilities. Most automated tools provide a capture-and-playback facility; however, there are tools that require extensive scripting or programming to automate test cases.

All Pairs testing Also known as pairwise testing, this is a black box testing approach in which every pair of input parameter values is tested together at least once, which helps verify that the software works as expected across input combinations without having to test every full combination.
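The value of pairwise testing is that a small suite can still cover every value pair for every pair of parameters. This sketch, using hypothetical browser/OS/locale parameters, checks that a four-test suite covers all pairs that a full combinatorial suite of eight tests would cover.

```python
from itertools import combinations, product

# Illustrative parameters; a pairwise suite must cover every value pair
# for every pair of parameters at least once.
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux"],
    "locale": ["en", "de"],
}

def required_pairs(params):
    """All (parameter, value) pairs that a pairwise suite must cover."""
    pairs = set()
    for (p1, v1s), (p2, v2s) in combinations(params.items(), 2):
        for v1, v2 in product(v1s, v2s):
            pairs.add(((p1, v1), (p2, v2)))
    return pairs

def covered_pairs(tests, params):
    """All (parameter, value) pairs a given suite actually covers."""
    names = list(params)
    covered = set()
    for test in tests:
        for p1, p2 in combinations(names, 2):
            covered.add(((p1, test[p1]), (p2, test[p2])))
    return covered

# A full combinatorial suite would need 2*2*2 = 8 tests; these 4
# hypothetical tests already cover all 12 required pairs.
suite = [
    {"browser": "Chrome",  "os": "Windows", "locale": "en"},
    {"browser": "Chrome",  "os": "Linux",   "locale": "de"},
    {"browser": "Firefox", "os": "Windows", "locale": "de"},
    {"browser": "Firefox", "os": "Linux",   "locale": "en"},
]
print("all pairs covered:", required_pairs(params) <= covered_pairs(suite, params))
```

The savings grow quickly: with more parameters and values, pairwise suites are typically a small fraction of the full cross-product.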

Beta Testing This is a formal type of software testing carried out by end customers before the software is released or handed over to all end users. Successful completion of beta testing means customer acceptance of the software.

Black Box testing Black box testing is a software testing method in which testers are not required to know the coding or internal structure of the software. Black box testing relies on exercising the software with various inputs and validating the results against expected outputs.
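In code, the black-box attitude means asserting only on inputs and outputs, never on internals. The discount function below is a hypothetical system under test; the cases exercise it purely through its public interface.

```python
# Black-box style check of a hypothetical discount function: we assert only
# on inputs and outputs, never on how the function is implemented.
def apply_discount(price, code):
    # Implementation details are irrelevant to the black-box tester.
    return round(price * 0.9, 2) if code == "SAVE10" else price

cases = [
    ((100.0, "SAVE10"), 90.0),   # valid code applies 10% discount
    ((100.0, "BOGUS"), 100.0),   # invalid code leaves price unchanged
    ((0.0, "SAVE10"), 0.0),      # zero-price edge case
]
for (price, code), expected in cases:
    actual = apply_discount(price, code)
    assert actual == expected, f"{price}, {code}: got {actual}, expected {expected}"
print("all black-box cases passed")
```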

Backward Compatibility Testing A type of software testing performed to check that a newer version of the software can be installed successfully over the previous version, and that the newer version works correctly with the table structures, data structures and files that were created by the previous version of the software.

Boundary Value Testing (BVT) Boundary value testing is a testing technique based on the concept that errors aggregate at boundaries. In this technique, testing is done extensively to check for defects at boundary conditions. If a field accepts values 1 to 100, then testing is done for the values 0, 1, 2, 99, 100 and 101.
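The 1-to-100 example above can be written directly as a small test: the hypothetical `accepts` function stands in for the field's validation logic, and the cases are exactly the six boundary values suggested.

```python
# Boundary-value checks for a hypothetical field accepting 1..100, using
# exactly the values suggested above: 0, 1, 2, 99, 100 and 101.
def accepts(value, low=1, high=100):
    return low <= value <= high

boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"boundary failure at {value}"
print("all boundary cases passed")
```

An off-by-one mistake such as writing `low < value` instead of `low <= value` would be caught immediately by the case for 1.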

Big Bang Integration testing This is one of the integration testing approaches; in big bang integration testing, all or almost all of the modules are developed first and then coupled together and tested in one step.

Bottom up Integration testing Bottom-up integration testing is an integration testing approach in which testing starts with the smaller pieces or subsystems of the software and works all the way up to cover the entire software system. It begins with small portions of the software and gradually scales up in size, complexity and completeness.

Branch Testing Branch testing is a white box testing method for designing test cases that exercise the code for every branching condition. The branch testing method is applied during unit testing.

Browser compatibility Testing This is one of the subtypes of compatibility testing, performed by the testing team. Browser compatibility testing is performed for web applications with combinations of different browsers and operating systems.
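Branch testing, described above, aims to execute every outcome of every branching condition at least once. For the hypothetical grading function below, four cases are enough to drive each branch both ways.

```python
# Branch-testing sketch: the function under test (grade) is hypothetical.
# It has two branching conditions, and the cases below drive every branch
# outcome (each condition true and false) at least once.
def grade(score):
    if score < 0 or score > 100:
        return "invalid"
    if score >= 60:
        return "pass"
    return "fail"

branch_cases = [
    (-1, "invalid"),   # first condition true via score < 0
    (101, "invalid"),  # first condition true via score > 100
    (60, "pass"),      # first condition false, second condition true
    (59, "fail"),      # both conditions false
]
for score, expected in branch_cases:
    assert grade(score) == expected, f"branch failure at {score}"
print("every branch exercised")
```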

Compatibility testing Compatibility testing is one of the test types performed by the testing team. It checks whether the software can run on different hardware, operating systems, bandwidths, databases, web servers, application servers, hardware peripherals, emulators, configurations and processors, and on different browsers and browser versions.

Component Testing This type of software testing is performed by developers. Component testing is carried out after unit testing is complete. It involves testing a group of units together as a whole rather than testing individual functions or methods.

Condition Coverage Testing Condition coverage testing is a technique used during unit testing, in which the developer tests all the condition statements (if, if-else, case, etc.) in the code being unit tested.

Dynamic Testing Testing can be performed as static testing or dynamic testing. Dynamic testing is an approach in which testing is done by executing the code or software; unit testing, functional testing, regression testing and performance testing are all forms of dynamic testing.

Decision Coverage Testing Decision coverage testing is a technique used in unit testing; its objective is to exercise and validate every decision made in the code, e.g. if, if-else and case statements.

End-to-end Testing End-to-end testing is performed by the testing team; its focus is to test end-to-end flows, e.g. from order creation through reporting, or from order creation through item return. End-to-end testing usually mimics real-life scenarios and usage, and involves testing information flow across applications.

Exploratory Testing Exploratory testing is an informal type of testing conducted to learn the software while simultaneously looking for errors or application behavior that seems non-obvious. It is usually done by testers but can also be done by other stakeholders, such as business analysts, developers and end users, who are interested in learning the functions of the software while looking for errors or non-obvious behavior.

Equivalence Partitioning Equivalence partitioning, also known as equivalence class partitioning, is a software testing technique rather than a type of testing by itself. The technique is used in black box and grey box testing. Equivalence partitioning classifies test data into positive and negative equivalence classes; this classification ensures that both positive and negative conditions are tested.

Functional Testing Functional testing is a formal type of testing performed by testers. It focuses on testing the software against the design documents, use cases and requirements document. Functional testing is a black box type of testing and, unlike white box testing, does not require knowledge of the internal workings of the software.

Fuzz Testing Fuzz testing, or fuzzing, is a software testing technique that involves testing with unexpected or random inputs. The software is monitored for failures or error messages caused by the input errors.

GUI (Graphical User Interface) testing This type of software testing checks that the software's GUI meets the requirements set out in the GUI mockups and detailed design documents, e.g. checking the length and capacity of the input fields provided on a form, or the type of input field provided (some form fields may be displayed as a dropdown box or as a set of radio buttons). GUI testing ensures that the GUI elements of the software match the approved GUI mockups, detailed design documents and functional requirements. Most functional test automation tools rely on GUI capture-and-playback capabilities, which makes script recording faster but increases the effort spent on script maintenance.

Glass box Testing Glass box testing is another name for white box testing. It is a testing method that involves testing individual statements, functions and so on; unit testing is one of the glass box testing methods.

Gorilla Testing This type of software testing is done by the software testing team, and has a scary name. The objective of gorilla testing is to exercise one or a few functionalities thoroughly or exhaustively by having multiple people test the same functionality.

Happy path testing Also known as golden path testing, this type of testing focuses on the selective execution of tests that do not exercise the software for negative or error conditions.

Integration Testing Integration testing, known as I&T for short, is one of the important types of software testing. Once the individual units or components have been tested by developers and found working, the testing team runs tests that exercise the connectivity among those units or components. There are different approaches to integration testing: top-down integration testing, bottom-up integration testing, and a combination of the two known as sandwich testing.

Interface Testing Software provides one or more interfaces, such as a graphical user interface, a command-line interface or an application programming interface, to interact with its users or with other software. Interfaces serve as the medium for the software to accept input and provide results. The approach to interface testing depends on the type of interface being tested: GUI, API or CLI.
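The equivalence-partitioning technique described above can be sketched in a few lines. The age field and its 18-to-65 valid range here are hypothetical; the point is that one representative value per class stands in for the whole class.

```python
# Equivalence-partitioning sketch for a hypothetical age field that accepts
# 18..65. One representative per class replaces exhaustive testing of every
# possible age; the class names and ranges are illustrative.
partitions = {
    "below range (invalid)": range(0, 18),    # negative equivalence class
    "within range (valid)": range(18, 66),    # positive equivalence class
    "above range (invalid)": range(66, 120),  # negative equivalence class
}

def is_valid_age(age):
    return 18 <= age <= 65

for name, values in partitions.items():
    representative = list(values)[len(values) // 2]  # one member per class
    result = is_valid_age(representative)
    assert result == ("(valid)" in name), f"unexpected result for {name}"
    print(f"{name}: tested {representative} -> {result}")
```

Combining this with boundary value testing (testing 17, 18, 65 and 66 as well) covers both the interiors and the edges of each class.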

Internationalization Testing Internationalization testing is performed by the software testing team to check the extent to which the software can support internationalization, i.e. usage of different languages, different character sets, double-byte characters, etc. For example, Gmail is a web application used by people all over the world, working with different languages and single-byte or multi-byte character sets.

Keyword-driven Testing Keyword-driven testing is more of an automated software testing approach than a type of testing in itself. Keyword-driven testing is also known as action-driven testing or table-driven testing.
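The table-driven idea can be sketched as follows: test steps are rows of keywords plus arguments, and a small driver maps each keyword to an action. The `FakeApp` class, its fields and the keywords here are all hypothetical stand-ins for a real application and framework.

```python
# Keyword-driven (table-driven) testing sketch. Everything here is a
# hypothetical stand-in: a real framework would drive an actual application.
class FakeApp:
    def __init__(self):
        self.fields, self.logged_in = {}, False
    def enter(self, field, value):
        self.fields[field] = value
    def click_login(self):
        self.logged_in = (self.fields.get("user") == "alice"
                          and self.fields.get("password") == "secret")

def run_table(app, table):
    """Interpret each row of the table as (keyword, *arguments)."""
    actions = {
        "enter": lambda app, field, value: app.enter(field, value),
        "click": lambda app, target: getattr(app, f"click_{target}")(),
    }
    for keyword, *args in table:
        actions[keyword](app, *args)

# The test itself is just data, editable without touching the driver code.
test_table = [
    ("enter", "user", "alice"),
    ("enter", "password", "secret"),
    ("click", "login"),
]
app = FakeApp()
run_table(app, test_table)
print("logged in:", app.logged_in)
```

The appeal of this approach is that non-programmers can author tests as tables while the keyword implementations are maintained separately.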

Load Testing Load testing is a type of non-functional testing; it is done to check the behavior of the software under normal and peak load conditions. Load testing is usually performed using automated testing tools. It aims to find bottlenecks or issues that prevent the software from performing as intended at its peak workloads.
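At its core, a load test fires many concurrent requests and records latencies. This minimal sketch uses a sleeping stand-in function in place of a real network request; real load tools (JMeter, Locust, etc.) do the same thing at much larger scale with richer reporting.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Minimal load-test sketch: fire concurrent "requests" at a stand-in
# operation and report timings. fake_request is hypothetical; a real load
# test would call the system under test over the network.
def fake_request():
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real server/network latency
    return time.perf_counter() - start

def run_load(concurrent_users, requests_per_user):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: fake_request(),
                                  range(concurrent_users * requests_per_user)))
    return latencies

latencies = run_load(concurrent_users=5, requests_per_user=4)
print(f"{len(latencies)} requests, max latency {max(latencies):.3f}s")
```

Ramping `concurrent_users` up gradually while watching latency percentiles is what distinguishes normal-load behavior from peak-load behavior.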

Localization Testing Localization testing is a type of software testing performed by software testers. In this type of testing, the software is expected to adapt to a particular locale: it should support that locale or language in terms of display, fonts, date and time, currency, and accepting input in that locale. For example, many web applications allow a choice of locale such as English, French, German or Japanese. Once a locale is defined or set in the software's configuration, the software is expected to work correctly with that language and locale.

Negative Testing This approach to testing embodies the “attitude to break”: functional and non-functional tests that are intended to break the software by entering incorrect data, such as an incorrect date, time or string, uploading a binary file where a text file is supposed to be uploaded, or entering a huge text string in input fields. It is also a positive test for an error condition.

Non functional testing Software is built to fulfill both functional and non-functional requirements; non-functional requirements include performance, usability and localization. Many types of testing, such as compatibility testing, compliance testing, localization testing, usability testing and volume testing, are carried out to check non-functional requirements.

Pair Testing Pair testing is a software testing technique that can be done by software testers, developers or business analysts (BAs). As the name suggests, two people are paired together: one to test, the other to monitor and record the test results. Pair testing can also be performed in tester-developer, tester-business analyst or developer-business analyst combinations. Pairing testers with developers helps detect defects faster, identify the root cause, and fix and verify the fix.
Performance Testing Performance testing is a type of software testing and part of performance engineering, performed to check quality attributes of the software such as stability, reliability and availability. It is carried out by the performance engineering team. Unlike functional testing, performance testing is done to check non-functional requirements: how well the software works under anticipated and peak workloads. There are different variations or subtypes of performance testing, such as load testing, stress testing, volume testing, soak testing and configuration testing.

Penetration Testing Penetration testing, or pentest for short, is a type of security testing. It is done to test how secure the software and its environment (hardware, operating system and network) are when subjected to attack by an external or internal intruder, which can be a human hacker or a malicious program. A pentest uses methods to intrude forcibly (by brute-force attack) or to exploit a weakness (vulnerability) in order to gain access to software, data or hardware, with the intent of exposing ways to steal, manipulate or corrupt data, software files or configuration. Penetration testing is a form of ethical hacking: an experienced penetration tester uses the same methods and tools that a hacker would, but with the intention of identifying vulnerabilities and getting them fixed before a real hacker or malicious program exploits them.

Regression Testing Regression testing is a type of software testing carried out by software testers as functional regression tests and by developers as unit regression tests. The objective of regression tests is to find defects that were introduced by defect fixes or by the introduction of new features. Regression tests are ideal candidates for automation.

Retesting Retesting is a type of testing carried out by software testers as part of defect fix verification. For example, suppose a tester is verifying a defect fix and three test cases failed due to that defect. Once the tester verifies the defect fix as resolved, the tester retests the same functionality by executing the test cases that failed earlier.

Risk based Testing Risk-based testing is a distinct approach to testing software. In risk-based testing, the requirements and functionality of the software to be tested are prioritized as critical, high, medium and low. All critical and high-priority tests are executed first, followed by medium-priority tests. Low-priority or low-risk functionality is tested at the end, or may not be tested at all, depending on the time available for testing.

Smoke testing Smoke testing is carried out by software testers to check whether a new build provided by the development team is stable enough, i.e. whether its major functionality works as expected, to carry out further or detailed testing. Smoke testing is intended to find “show stopper” defects that would prevent testers from testing the application in detail. A smoke test carried out for a build is also known as a build verification test.

Security Testing Security testing is carried out by a specialized team of software testers. The objective of security testing is to ensure the software is secure against external and internal threats from humans and malicious programs. Security testing checks how good the software's authorization mechanism is, how strong its authentication is, how the software maintains the confidentiality and integrity of its data, and how available the software remains in the event of an attack by hackers or malicious programs. Security testing requires good knowledge of the application, technology, networking and security testing tools. With the increasing number of web applications, the necessity of security testing has grown considerably.
Sanity Testing Sanity testing is carried out mostly by testers, and in some projects by developers as well. It is a quick evaluation that the software, environment, network and external systems are up and running, and that the software environment as a whole is stable enough to proceed with extensive testing. Sanity tests are narrow, and most of the time they are not documented.

Scalability Testing Scalability testing is a non-functional test intended to check one of the software quality attributes: scalability. It is not focused on just one or a few functionalities of the software but on its performance as a whole, and it is usually done by the performance engineering team. The objective of scalability testing is to test the ability of the software to scale up with increased users, increased transactions, growth in database size, and so on. Software performance does not necessarily increase with better hardware configuration; scalability tests help find out how much more workload the software can support as its user base, transactions and data storage expand.

Stability Testing Stability testing is a non-functional test intended to check another software quality attribute: stability. It focuses on how stable the software is when subjected to loads at acceptable levels, peak loads, loads generated in spikes, and larger volumes of data to be processed. Stability testing involves performing different types of performance tests, such as load testing, stress testing, spike testing and soak testing.

Static Testing Static testing is a form of testing in which approaches such as reviews and walkthroughs are employed to evaluate the correctness of a deliverable. In static testing the software code is not executed; instead it is reviewed for syntax, commenting, naming conventions, the size of functions and methods, and so on. Static testing usually relies on checklists against which deliverables are evaluated, and it can be applied to requirements, designs and test cases as well as code.

Stress Testing Stress testing is a type of performance testing in which the software is subjected to peak loads, and even beyond its breaking point, to observe how it behaves. Stress testing also examines the behavior of the software with insufficient resources such as CPU, memory, network bandwidth and disk space. It helps check quality attributes such as robustness and reliability.

System Testing System testing encompasses multiple software testing types that together validate the software as a whole (software, hardware and network) against the requirements for which it was built. Different types of tests (GUI testing, functional testing, regression testing, smoke testing, load testing, stress testing, security testing, ad-hoc testing, etc.) are carried out to complete system testing.

Soak Testing Soak testing is a type of performance testing in which the software is subjected to load over a significant duration of time; a soak test may run for a few days or even a few weeks. Soak testing is conducted to find errors that cause degradation of software performance with continued usage. It is done extensively for electronic devices, which are expected to run continuously for days, months or years without restarting or rebooting. With the growth of web applications, soak testing has gained significant importance, as web application availability is critical to sustaining a successful business.

System Integration Testing System integration testing, known as SIT for short, is conducted by the software testing team. As the name suggests, the focus of system integration testing is to find errors related to integration among different applications, services, third-party vendor applications, and so on. As part of SIT, end-to-end scenarios are tested that require the software to interact (send or receive data) with other upstream or downstream applications, services and third-party application calls.

Unit testing is a type of testing performed by software developers. Unit testing follows a white box testing approach in which the developer tests units of source code: statements, branches, functions, methods, or a class or interface in OOP (object-oriented programming). Unit testing usually involves developing stubs and drivers. Unit tests are ideal candidates for automation; automated tests can run as unit regression tests on new builds or new versions of the software. There are many useful unit testing frameworks, such as JUnit and NUnit, that can make unit testing more effective.
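A minimal example of framework-based unit testing, using Python's built-in unittest module (the Python counterpart of the JUnit/NUnit frameworks mentioned above). The function under test, `word_count`, is a hypothetical example.

```python
import unittest

# Hypothetical unit under test.
def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit tests run fast"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# Run the suite programmatically; in a real project a test runner or CI job
# would discover and run these tests on every build.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because such tests are cheap to rerun, the same suite doubles as the unit regression tests mentioned above whenever a new build arrives.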

Usability testing is a type of software testing performed to understand how user-friendly the software is. The objective of usability testing is to let end users use the software, observe their behavior and emotional response (did users enjoy using the software, or were they stressed by it?), collect their feedback on how the software could be made more usable or user-friendly, and incorporate the changes that make the software easier to use.

User Acceptance testing (UAT) User acceptance testing is a must for any project; it is performed by the clients or end users of the software. UAT allows SMEs (subject matter experts) from the client to test the software with their actual business or real-world scenarios and to check whether the software meets their business requirements.

Volume testing is a non-functional type of testing carried out by the performance engineering team; it is one of the types of performance testing. Volume testing is carried out to find the response of the software to different sizes of data being received or processed. For example, if you were testing Microsoft Word, volume testing would check whether Word can open, save and work on files of different sizes (10 to 100 MB).

Vulnerability Testing This involves identifying and exposing software, hardware or network vulnerabilities that can be exploited by hackers and malicious programs like viruses or worms. Vulnerability testing is key to software security and availability. With the increasing number of hackers and malicious programs, vulnerability testing is critical to the success of a business.

White box Testing White box testing is also known as clear box testing, transparent box testing and glass box testing. It is a software testing approach that tests the software with knowledge of its internal workings. The white box approach is used in unit testing, which is usually performed by software developers. White box testing intends to execute code and test statements, branches, paths, decisions and data flow within the program being tested. White box testing and black box testing complement each other, as each approach has the potential to uncover a specific category of errors.