User talk:Mailtodhir

'Aim of Software Testing:' --Mailtodhir (talk) 11:06, 21 April 2011 (UTC)

The aim of software testing is to:
1. Measure the quality of the software in terms of the number of defects found, the number of tests run, and the portion of the system covered by the tests.
2. Exercise both the functional and non-functional attributes of the software.
3. When a bug or defect is found with the help of testing, log the bug so the development team can fix it.
4. Once the bug is fixed, carry out testing again to ensure that the bug was indeed fixed and no new defects have been introduced into the software (regression testing).
5. Through this entire cycle, increase the quality of the software.

Traceability Matrix

Overview: A traceability matrix is a method used to validate the compliance of a process or product with the requirements for that process or product. The requirements are each listed in a row of the matrix, and the columns of the matrix are used to identify how and where each requirement has been addressed.

In a software development process, a traceability matrix is a table that correlates any two baselined documents that require a many-to-many relationship, to determine the completeness of that relationship. It is often used to map high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, test plan, and test cases.

Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is totalled for each row and each column; this value indicates the mapping of the two items. A zero indicates that no relationship exists, and it must be determined whether one should be made. Large values imply that the item is too complex and should be simplified.

To ease the creation of traceability matrices, it is advisable to record the relationships in the source documents themselves, for both backward traceability and forward traceability. In other words, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other.

Baseline Traceability Matrix

Description: A table that documents the requirements of the system for use in subsequent stages to confirm that all requirements have been met. Size and format: document each requirement to be traced; a requirement may be mapped to such things as a hardware component, an application unit, or a section of a design specification.
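The row-and-column counting described above can be sketched in a few lines of code. This is an illustrative sketch only; the requirement and test-case identifiers are invented for the example:

```python
# Illustrative sketch of a traceability matrix: requirement IDs down the
# left column, test-case IDs across the top, a mark in each intersecting
# cell, and row/column totals. All identifiers here are made up.

requirements = ["R1", "R2", "R3"]
test_cases = ["TC1", "TC2", "TC3", "TC4"]

# Which test case covers which requirement (the "marks" in the matrix).
links = {("R1", "TC1"), ("R1", "TC2"), ("R2", "TC3"), ("R2", "TC4")}

def row_total(req):
    """Number of test cases linked to a requirement (its row total)."""
    return sum(1 for tc in test_cases if (req, tc) in links)

def column_total(tc):
    """Number of requirements linked to a test case (its column total)."""
    return sum(1 for req in requirements if (req, tc) in links)

# A zero row total flags a requirement with no coverage, as described above.
uncovered = [req for req in requirements if row_total(req) == 0]
print(uncovered)  # R3 has no linked test case
```

A zero in a row is exactly the "missing requirement" signal the matrix is meant to surface; a very large row total is the "too complex, should be simplified" signal.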

Use a Traceability Matrix to: •	verify and validate system specifications

•	ensure that all final deliverable documents are included in the system specification, such as process models and data models

•	improve the quality of a system by identifying requirements that are not addressed by configuration items during design and code reviews, and by identifying extra configuration items that are not required. Examples of configuration items are software modules and hardware devices

•	provide input to change requests and future project plans when missing requirements are identified

•	provide a guide for system and acceptance test plans, showing what needs to be tested

Useful Traceability Matrices

Various traceability matrices may be utilized throughout the system life cycle. Useful ones include:

•	Functional specification to requirements document: shows that each requirement (obtained from a preliminary requirements statement provided by the customer or produced in the Concept Definition stage) has been covered in an appropriate section of the functional specification.

•	Top level configuration item to functional specification: For example, a top level configuration item, Workstation, may be one of the configuration items that satisfies the function Input Order Information. On the matrix, each configuration item would be written down the left hand column and each function would be written across the top.

•	Low level configuration item to top level configuration item: For example, the top level configuration item, Workstation, may contain the low level configuration items Monitor, CPU, keyboard, and network interface card.

•	Design specification to functional specification verifies that each function has been covered in the design.

•	System test plan to functional specification ensures you have identified a test case or test scenario for each process and each requirement in the functional specification.

Although the construction and maintenance of traceability matrices may be time-consuming, they provide a quick reference during verification and validation tasks.

Sample Traceability Matrix

A traceability matrix is a report from the requirements database or repository. What information the report contains depends on your need. Information requirements determine the associated information that you store with the requirements. Requirements management tools capture associated information or provide the capability to add it.

The examples show forward and backward tracing between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S." Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.

For requirements tracing and resulting reports to work, the requirements must be of good quality. Requirements of poor quality transfer work to subsequent phases of the SDLC, increasing cost and schedule and creating disputes with the customer. A variety of reports are necessary to manage requirements. Reporting needs should be determined at the start of the effort and documented in the requirements management plan.

Interview Questions

1. What issues do you face while testing web-based applications?

In a web-based application, issues usually come up with compatibility. The application may display perfectly in one browser version but not in another. The UI may change from browser to browser or from version to version, and some browsers/versions do not support certain object properties. These are the cases where issues usually arise. Functional issues are generally the same in any browser.

2. What is Test Data? Explain.

Test data is the data supplied to test cases while executing them; it consists of the inputs the customer expects the application to handle.

To understand Test Data, first we should understand the Test case.

A test case comprises the following parts:
1. Precondition
2. User action/Steps
3. Input data/Test data
4. Expected Result
5. Actual Result

Every test has some precondition. For example, if we are testing a web-based login page:

1. Precondition: The web page should load in the web browser.

2. User action/steps: i. Enter a valid User ID and valid Password on the login page. ii. Click the 'Submit' button.

3. Input data/Test data: User ID: rajendra_penumalli (valid User ID); Password: passw0rd (valid Password)

4. Expected Result: The application should display the home page.

5.Actual Result: 

In the test case above, the User ID and Password in step three (rajendra_penumalli, passw0rd) are the test data.

Using Boundary Value Analysis and Equivalence Class Partitioning, a tester can generate test data for different scenarios. Each set of test data can then be converted into a test case, for example using the cause-effect graphing technique.

For example, for the above login functionality we can write test data as follows:

Test Data Set 1: Valid User ID, Valid Password

Test Data Set 2: Valid User ID, Invalid Password

Test Data Set 3: Invalid User ID, Valid Password

Test Data Set 4: Invalid User ID, Invalid Password

Test Data Set 5: Blank User ID, Valid Password

Test Data Set 6: Valid User ID, Blank Password

Test Data Set 7: Blank User ID, Blank Password
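The data sets above pair up equivalence classes of the two input fields. As a sketch, the full set of combinations can be enumerated programmatically; the class names and the expected outcomes are illustrative, standing in for real test data:

```python
# Sketch: enumerating login test data from the equivalence classes of
# each field. The class names and expectations are illustrative only.
from itertools import product

classes = ["valid", "invalid", "blank"]

# Every combination of User ID class x Password class; only the
# all-valid pair is expected to log in successfully.
data_sets = [
    {"user_id": u, "password": p, "expected": u == "valid" and p == "valid"}
    for u, p in product(classes, repeat=2)
]

print(len(data_sets))                             # 9 combinations in total
print(sum(ds["expected"] for ds in data_sets))    # exactly 1 should succeed
```

Note that full enumeration yields nine combinations; the seven listed above omit two (Invalid/Blank and Blank/Invalid), which a tester may judge redundant with the other invalid-input sets.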

As a testing engineer, what do you do if the same defect occurs again in the software?

Simply put, if the bug has not been fixed properly, the tester changes the status of the defect to REOPEN and sends it back to the development team to fix.

This condition may happen due to one of the following cases:

1. The same old code is given for testing without the bug being fixed.

2. The development team released the wrong code, i.e. instead of the latest code with the bug fixed, the same old code containing the bug was released for testing.

3. The development team forgot to fix the bug and accidentally closed it in the bug tracking tool.

4. Due to some settings in the development environment, the bug only appears to be fixed.

Whatever the reason, with every software release the development team also sends release notes, which contain the details of the resolved/fixed defects (or at least the defect IDs).

The testing team then performs smoke testing, followed by defect verification testing for the defects mentioned in the build release notes.

If any bug/defect mentioned as closed in the release notes is not actually fixed in the application, the testing team rejects the build; the development team should fix the bug and then release another version of the software build.

A testing engineer finds a bug but the developer does not accept it. How do you get the bug fixed?

If the developer doesn't accept the bug, we need to send our bug report along with a screenshot of where we found the bug. We can also show the specification to the developer, pointing out that these are the client's requirements and this is what the client expects. If even then the developer doesn't accept the bug, we can go for a bug review meeting.

When the developer does not accept the bug, the following steps should be followed:

1. Check whether one of them has not understood the functionality properly.
2. Confirm they are on the same page.
3. Determine what kind of bug it is and when it occurs.
4. Determine where it occurs. Only in the test environment? If so, check on another machine to reproduce it.
5. Check whether it is reproducible on another machine.
6. Involve the developer in discussions with the team lead.
7. Decide and conclude the situation.

What is the difference between a test case and a test scenario?

A test case is a condition that is executed with a predefined set of steps and known inputs to check for an expected output. Generally a test case has:

1) Precondition
2) Steps to execute
3) Input data
4) Expected output
5) Status (Pass/Fail)

A test scenario is a set of test cases. For example, withdrawing money from an ATM is a scenario; but to withdraw money, you need to execute many test cases, provide many inputs, and receive many outputs, finally getting your money with a transaction receipt.

Test Scenario: the situation the test engineer identifies as needing testing is known as a test scenario.

Test Cases: to test that situation, the test engineer derives specific checks from the requirements; these are known as test cases.

What is the difference between Smoke testing and Sanity testing?

1.	The objective of both is similar, but the difference is who performs them. Smoke testing is done by the development department: the developers check whether the build they developed is in a condition fit to deliver to the testing department. Sanity testing is done by the testing department to check whether the build is in a condition fit to take up for testing.

2.	Smoke testing is done first, when a build comes for testing, by going through all the forms etc. to ascertain that the build is ready for further testing.

Sanity testing is done last, when a build is going to be released to the client, to verify that all the major functionality is working fine; it is a subset of regression testing.

3.	Smoke testing: the development team tests whether the build is suitable for conducting testing, and whether it accepts the customer-expected values.

Sanity testing: after getting the initial build from the development team, the testing team finds out whether it accepts keyboard input values and whether other objects respond properly; this is called sanity testing.

Smoke Testing: software testing done to ensure that the build can be accepted for thorough software testing. Basically, it is done to check the stability of the build received for testing.

Sanity Testing: after receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the changes rectified the software bugs and introduced no new ones. When multiple cycles of regression testing are executed, sanity testing can also be done at later cycles, after thorough regression test cycles. If we are moving a build from a staging/testing server to a production server, sanity testing can be done to check whether the build is sane enough to move further to production.

Difference between Smoke & Sanity Software Testing:

•	Smoke testing is a wide approach where all areas of the software application are tested without going too deep. Sanity testing, however, is a narrow regression test focused on one or a small set of areas of functionality of the application.

•	The test cases for smoke testing can be either manual or automated. A sanity test is generally done without test scripts or test cases.

•	Smoke testing is done to ensure that the main functions of the application are working; during smoke testing we do not go into finer details. Sanity testing is a cursory type of testing, done whenever a quick round of testing can show that the application is functioning according to business/functional requirements.

•	Smoke testing is done to check whether the build can be accepted for thorough testing. Sanity testing is done to ensure that the requirements are met.


What is the difference between Project Based Testing and Product Based Testing?

Project-based testing: test engineers test the application only once; it is one-time testing. In project-based testing, we develop the application for another company's purposes, and only they will use it.

Product-based testing: it is repetitive. Take the example of Microsoft Windows: Win 95, 98, and now Vista; all the past features should be tested along with the new enhancements. It is a repetitive task. In a product-based company we develop the application as a product, and that product is used by end users at large.

How to report a bug?

When a tester tests an application and finds a defect, the life cycle of the defect starts, and it becomes very important to communicate the defect to the developers in order to get it fixed, keep track of its current status, find out whether any similar defect was ever found in previous rounds of testing, etc. Nowadays many bug reporting tools are available which help in tracking and managing bugs effectively. We use BUGZILLA version 3.0.

It’s a good practice to take screen shots of execution of every step during software testing. If any test case fails during execution, it needs to be failed in the bug-reporting tool and a bug has to be reported/logged for the same. The tester can choose to first report a bug and then fail the test case in the bug-reporting tool or fail a test case and report a bug. In any case, the Bug ID that is generated for the reported bug should be attached to the test case that is failed.

At the time of reporting a bug, all the mandatory fields should be filled in.

Contents of a Bug

Project: Name of the project under which the testing is being carried out.

Subject: A short description of the bug which helps in identifying it. This generally starts with the project identifier number/string, and should be clear enough to help the reader anticipate the problem/defect for which the bug has been reported.

Description: Detailed description of the bug. This generally includes the steps that are involved in the test case and the actual results. At the end of the summary, the step at which the test case fails is described along with the actual result obtained and expected result.

Summary: This field contains some keyword information about the bug, which can help in minimizing the number of records to be searched.

Detected By: Name of the tester who detected/reported the bug.

Assigned To: Name of the developer who is supposed to fix the bug. Generally this field contains the name of developer group leader, who then delegates the task to member of his team, and changes the name accordingly.

Test Lead: Name of leader of testing team, under whom the tester reports the bug.

Detected in Version: This field contains the version information of the software application in which the bug was detected.

Closed in Version: This field contains the version information of the software application in which the bug was fixed.

Date Detected: Date at which the bug was detected and reported.

Expected Date of Closure: Date at which the bug is expected to be closed. This depends on the severity of the bug.

Actual Date of Closure: As the name suggests, actual date of closure of the bug i.e. date at which the bug was fixed and retested successfully.

Priority: Priority of the bug fixing. This specifically depends upon the functionality that the bug is hindering. Generally Low, Medium, High, and Urgent are the priority levels used.

Severity: This is typically a numerical field which displays the severity of the bug. It can range from 1 to 5, where 1 is the highest severity and 5 the lowest.

Status: This field displays current status of the bug. A status of ‘New’ is automatically assigned to a bug when it is first time reported by the tester, further the status is changed to Assigned, Open, Retest, Pending Retest, Pending Reject, Rejected, Closed, Postponed, Deferred etc. as per the progress of bug fixing process.

Bug ID: This is a unique ID i.e. number created for the bug at the time of reporting, which identifies the bug uniquely.

Attachment: Sometimes it is necessary to attach screenshots of the tested functionality; these help the tester explain the testing done and help developers re-create the same testing conditions.

Test Case Failed: This field contains the test case that failed for the bug. The screenshots taken at the time of executing the test case, along with a detailed description of the bug and the expected and actual results, are attached to the bug for reference by the developer.

After reporting a bug, a unique Bug ID is generated by the bug-reporting tool and associated with the failed test case.

After the bug is reported, it is assigned a status of ‘New’, which goes on changing as the bug fixing process progresses.

If more than one tester is testing the software application, it is possible that another tester has already reported a bug for the same defect found in the application. In such a situation it becomes very important for the tester to find out whether any bug has been reported for a similar defect. If yes, the test case has to be blocked against the previously raised bug (and executed again once that bug is fixed). If no such bug was reported previously, the tester can report a new bug and fail the test case against the newly raised bug.

If no bug-reporting tool is used, then in that case, the test case is written in a tabular manner in a file with four columns containing Test Step No, Test Step Description, Expected Result and Actual Result. The expected and actual results are written for each step and the test case is failed for the step at which the test case fails.
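The four-column file described above can be produced with any spreadsheet or, as a sketch, with a few lines of code; the file name and step contents here are invented for illustration:

```python
# Sketch: writing the four-column test case table described above
# (Test Step No, Test Step Description, Expected Result, Actual Result)
# as a CSV file. File name and step contents are illustrative only.
import csv

rows = [
    ["Test Step No", "Test Step Description", "Expected Result", "Actual Result"],
    ["1", "Open the login page", "Login page is displayed", "As expected"],
    ["2", "Enter valid credentials and submit", "Home page is displayed", "Error shown"],
]

with open("failed_test_case.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The test case is then failed at the step whose actual result deviates from the expected result (step 2 in this invented example).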

This file containing the test case, together with the screenshots taken, is sent to the developers for reference. As the tracking process is not automated, it becomes important to keep the information about the bug updated from the time it is raised until it is closed.

Bug Life Cycle: statuses associated with the bug

New: When a bug is found/revealed for the first time, the software tester communicates it to his/her team leader (Test Lead) to confirm that it is a valid bug. After getting confirmation from the Test Lead, the tester logs the bug and the status ‘New’ is assigned to it.

Assigned: After the bug is reported as ‘New’, it comes to the development team, which verifies whether the bug is valid. If it is, the development leader assigns it to a developer to fix, and the status ‘Assigned’ is given to it.

Open: Once the developer starts working on the bug, he/she changes the status of the bug to ‘Open’ to indicate that he/she is working on it to find a solution.

Fixed: Once the developer makes necessary changes in the code and verifies the code, he/she marks the bug as ‘Fixed’ and passes it over to the Development Lead in order to pass it to the Testing team.

Pending Retest: After the bug is fixed, it is passed back to the testing team to be retested, and the status ‘Pending Retest’ is assigned to it.

Retest: The testing team leader changes the status of the bug, which is previously marked with ‘Pending Retest’ to ‘Retest’ and assigns it to a tester for retesting.

Closed: After the bug is assigned a status of ‘Retest’, it is again tested. If the problem is solved, the tester closes it and marks it with ‘Closed’ status.

Reopen: If, after retesting the software for the bug, the system behaves in the same way or the same bug arises once again, the tester reopens the bug and sends it back to the developer, marking its status as ‘Reopen’.

Pending Reject: If the developers think that a particular behavior of the system which the tester reported as a bug is in fact intended, and the bug is invalid, the bug is rejected and marked ‘Pending Reject’.

Rejected: If the Testing Leader finds that the system is working according to the specifications or the bug is invalid as per the explanation from the development, he/she rejects the bug and marks its status as ‘Rejected’.

Postponed: Sometimes, testing of a particular bug has to be postponed for an indefinite period. This may occur for many reasons, such as unavailability of test data or of a particular functionality. In that case the bug is marked with ‘Postponed’ status.

Deferred: In some cases a particular bug has little importance and its fix can be deferred; it is then marked with ‘Deferred’ status.
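The statuses above form a small state machine. One possible encoding of the allowed transitions is sketched below; the exact transition set varies by tool and is an assumption here, one plausible reading of the descriptions above:

```python
# Sketch of the bug life cycle as a state machine. The transition table
# below is one plausible reading of the statuses described above; real
# bug trackers differ in the exact transitions they allow.
ALLOWED = {
    "New": {"Assigned", "Rejected"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Pending Reject", "Postponed", "Deferred"},
    "Fixed": {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
    "Pending Reject": {"Rejected", "Open"},
}

def advance(status, new_status):
    """Move a bug to new_status, raising if the transition is not allowed."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

# A bug that is fixed, retested, and found still broken gets reopened:
s = "New"
for step in ["Assigned", "Open", "Fixed", "Pending Retest", "Retest", "Reopen"]:
    s = advance(s, step)
print(s)  # Reopen
```

Encoding the transitions explicitly makes invalid jumps (e.g. ‘New’ straight to ‘Closed’) fail loudly rather than silently corrupting the bug's history.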

What are the qualities of a good software bug report?

Anyone can write a bug report, but not everyone can write an effective one. You should be able to distinguish between an average bug report and a good one by applying the following characteristics and techniques:

1) Clearly specified bug number: Always assign a unique number to each bug report. This helps identify the bug record. If you are using an automated bug-reporting tool, this unique number is generated automatically each time you report a bug. Note the number and a brief description of each bug you report.

2) Reproducible: If your bug is not reproducible, it will never get fixed. Clearly mention the steps to reproduce the bug; do not assume or skip any step. A bug described step by step is easy to reproduce and fix.

3) Be specific: Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in a minimum of words yet effectively. Do not combine multiple problems even if they seem similar; write a separate report for each.

Some bonus tips for writing a good bug report:

1) Report the problem immediately: If you find a bug while testing, do not wait to write the detailed report later; write it immediately. This ensures a good, reproducible bug report. If you decide to write the report later, chances are high that you will miss important steps.

2) Reproduce the bug three times before writing the report: Your bug should be reproducible. Make sure your steps are robust enough to reproduce it without any ambiguity. If the bug is not reproducible every time, you can still file it, mentioning its periodic nature.

3) Test for the same bug in other similar modules: Developers sometimes use the same code in different similar modules, so chances are high that a bug in one module occurs in others as well. You can even try to find a more severe version of the bug you found.

4) Write a good bug summary: The summary helps developers quickly analyze the nature of the bug. A poor-quality report unnecessarily increases development and testing time. Communicate well through your summary, and keep in mind that the summary is used as a reference when searching the bug inventory.

5) Read the bug report before hitting the Submit button: Read all the sentences, wording, and steps used in the report. Check whether any sentence creates ambiguity that could lead to misinterpretation. Avoid misleading words or sentences so the report stays clear.

What is the difference between Validation and Verification?

Verification: the process of confirming that software "meets its specification". It involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications, and can be done with checklists, issues lists, and walkthroughs. It is an examination of the process, asking: are we building the product right?

1. Verification begins in the initial phase and is associated with all the phases of the SDLC.
2. Mostly QA (Quality Assurance) engineers are involved in the process of verification.
3. It asks: are we building the product right?
4. The inputs of verification are checklists, issues lists, walkthroughs, inspection meetings, and reviews.
5. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements.

E.g.: design reviews, code walkthroughs, and inspections.

Validation: the process of confirming that the software "meets the user's requirements". Validation typically involves actual testing and takes place after verification activities are completed.

1. This process focuses on the product after development (the outcome of the work).
2. Validation comes after the coding phase.
3. Most of the time, test engineers are involved in the validation phase.
4. It asks: are we building the right product?
5. The input of validation is the actual testing of an actual product.
6. The output of validation is the actual, working product.

Test Case

A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not.

Fields in test cases: Test case ID; Unit to test (what is to be verified?); Assumptions; Test data (variables and their values); Steps to be executed; Expected result; Actual result; Pass/Fail; Comments.

Keep in mind while writing test cases that all your test cases should be simple and easy to understand. Generally I use Excel sheets to write the basic test cases.
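The fields listed above map naturally onto a small record type. This is a sketch only; the sample values reuse the earlier login example and are otherwise invented:

```python
# Sketch: the test case fields listed above as a dataclass.
# The sample values are illustrative.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    unit_to_test: str                 # what is to be verified
    assumptions: str
    test_data: dict                   # variables and their values
    steps: list = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""           # filled in after execution
    status: str = ""                  # Pass/Fail, filled in after execution
    comments: str = ""

tc = TestCase(
    test_case_id="TC-001",
    unit_to_test="Login page",
    assumptions="Login page loads in the browser",
    test_data={"user_id": "rajendra_penumalli", "password": "passw0rd"},
    steps=["Enter user ID and password", "Click Submit"],
    expected_result="Application displays the home page",
)
tc.actual_result = "Application displays the home page"
tc.status = "Pass" if tc.actual_result == tc.expected_result else "Fail"
print(tc.status)  # Pass
```

The same structure serializes cleanly to a row in an Excel sheet or a record in a test management tool, which is why the field list above is so widely reused.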

Positive and Negative Testing

Positive Test Cases: Positive test cases are designed to check that we get the desired result with a valid set of inputs (e.g. the user should be able to log in to the system with a valid user name and password). A test case that checks for the correct, expected behaviour is said to be a positive test case.

Negative Test Cases: Negative test cases are designed to check that the system generates the correct error or warning messages with an invalid set of inputs (e.g. if the user enters a wrong username or password, the user should not be logged in and an error message should be shown). A test case that checks how the system handles wrong input is said to be a negative test case.
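Both kinds of case can be written against the same function. In the sketch below, check_login is a hypothetical stand-in for the real application, and the credentials and messages are invented:

```python
# Sketch: one positive and one negative test case against a hypothetical
# check_login stub. Credentials and messages are invented.
VALID_USERS = {"rajendra_penumalli": "passw0rd"}

def check_login(user_id, password):
    """Return (ok, message) for a login attempt."""
    if VALID_USERS.get(user_id) == password:
        return True, "home page"
    return False, "Invalid username or password"

# Positive test case: valid inputs should produce the desired result.
ok, msg = check_login("rajendra_penumalli", "passw0rd")
assert ok and msg == "home page"

# Negative test case: invalid inputs should produce the correct error
# message, not a crash and not a successful login.
ok, msg = check_login("rajendra_penumalli", "wrong")
assert not ok and msg == "Invalid username or password"
```

The negative case asserts on the error message as well as the failure, matching the definition above: the system should fail in the correct, documented way.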

If a customer wants a new feature to be added, how would you go about adding that?

First of all, it should be checked that adding the new feature will not make any of the other features non-functional. Only if it is technically feasible to add the new feature should the team start thinking further in that direction; if not, a workaround could be devised for that feature.

The existing SRS should be modified, and accordingly the FDS (Functional Design Specification) and IDS (Internal Design Specification) should be updated. Then, finally, the feature should be implemented.

What is the actual difference between re-testing and regression testing?

Re-testing is a process in which we execute the application to check whether a specific bug has been fixed. The changes made by the developer while fixing the bug may affect other parts of the code or application; regression testing is a process in which we check the effect of the changed code on the rest of the application.

What is the actual testing process in a practical or company environment? Today I got an interesting question from a reader: how is testing carried out in a company, i.e. in a practical environment? Those who are just out of college and starting to search for jobs are curious about what the actual working environment in companies is like. Here I focus on the actual software testing process in companies. By now I have good experience of a software testing career and day-to-day testing activities, so I will try to share more practically rather than theoretically.

Whenever we get a new project there is an initial project familiarity meeting. In this meeting we basically discuss: Who is the client? What is the project duration and when is delivery? Who is involved in the project, i.e. manager, tech leads, QA leads, developers, testers, etc.?

From the SRS (software requirement specification) the project plan is developed. The responsibility of the testers is to create the software test plan from this SRS and project plan. Developers start coding from the design. The project work is divided into different modules, and these modules are distributed among the developers. In the meantime, the testers' responsibility is to create test scenarios and write test cases for the assigned modules. We try to cover almost all the functional test cases from the SRS. The data can be maintained manually in Excel test case templates or in bug tracking tools.

When developers finish individual modules, those modules are assigned to testers. Smoke testing is performed on these modules, and if they fail this test, the modules are reassigned to the respective developers for a fix. For the modules that pass, manual testing is carried out from the written test cases. If any bug is found, it is assigned to the module's developer and logged in the bug tracking tool. On a bug fix, the tester does bug verification and regression testing of all related modules. If the bug passes verification it is marked as verified and then closed. Otherwise the above-mentioned bug cycle is repeated. (I will cover the bug life cycle in another post.)

Different tests are performed on individual modules, and integration testing is performed on module integration. These tests include compatibility testing, i.e. testing the application on different hardware, OS versions, software platforms, different browsers, etc. Load and stress testing are also carried out according to the SRS. Finally, system testing is performed by creating a virtual client environment. On passing all the test cases, a test report is prepared and the decision is taken to release the product!

So this was a brief outline of the project life cycle process. Here is the detail of what testing is carried out in each step of the software quality and testing life cycle specified by IEEE and ISO standards:
• Review of the software requirement specifications
• Objectives are set for the major releases
• Target dates are planned for the releases
• A detailed project plan is built; this includes the decision on design specifications
• Develop the test plan based on the design specifications
• Test plan: this includes objectives, the methodology adopted while testing, features to be tested and not to be tested, risk criteria, the testing schedule, multi-platform support, and the resource allocation for testing
• Test specifications: this document includes the technical details (software requirements) required prior to testing
• Writing of test cases: smoke (BVT) test cases, sanity test cases, regression test cases, negative test cases, extended test cases
• Development: modules are developed one by one
• Installer binding: installers are built around the individual product
• Build procedure: a build includes installers of the available products on multiple platforms
• Testing: smoke test (BVT), a basic application test to decide on further testing; testing of new features; cross-platform testing; stress testing and memory leakage testing
• Bug reporting: a bug report is created
• Development, code freeze: no more new features are added at this point
• Testing: builds and regression testing
• Decision to release the product
• Post-release scenario for further objectives
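The bug flow described above (logged, assigned, fixed, verified, closed, or reopened when verification fails) can be sketched as a small state machine. The state names and allowed transitions here are illustrative assumptions for this sketch, not an industry standard; real bug trackers define their own workflows.

```python
# Illustrative sketch of the bug life cycle described above.
# State names and transitions are assumptions for this example.

ALLOWED = {
    "new":      {"assigned"},
    "assigned": {"fixed"},
    "fixed":    {"verified", "reopened"},  # verification passes or fails
    "reopened": {"assigned"},              # the cycle repeats
    "verified": {"closed"},
    "closed":   set(),
}

class Bug:
    def __init__(self, title):
        self.title = title
        self.state = "new"

    def move(self, new_state):
        """Advance the bug, rejecting transitions the workflow forbids."""
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go {self.state} -> {new_state}")
        self.state = new_state
        return self

bug = Bug("login button broken")
bug.move("assigned").move("fixed").move("verified").move("closed")
print(bug.state)  # closed
```

Modelling the workflow this way makes the "otherwise the bug cycle repeats" rule explicit: a bug that fails verification can only go back through reopened and assigned, never straight to closed.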

Why does software have bugs?
• Miscommunication or no communication as to the specifics of what an application should or shouldn't do (the application's requirements).
• Software complexity: the complexity of current software applications can be difficult to comprehend for anyone without experience in modern-day software development. Multi-tier distributed systems, applications utilizing multiple local and remote web services, data communications, enormous relational databases, security complexities, and the sheer size of applications have all contributed to the exponential growth in software/system complexity.
• Programming errors: programmers, like anyone else, can make mistakes.
• Changing requirements (whether documented or undocumented): the end-user may not understand the effects of changes, or may understand and request them anyway; redesign, rescheduling of engineers, effects on other projects, work already completed that may have to be redone or thrown out, hardware requirements that may be affected, etc.

Web Testing, Example Test cases

While testing a web application you need to consider the following cases:

• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security

Functionality: In testing the functionality of web sites the following should be tested:
• Links
  i. Internal links
  ii. External links
  iii. Mail links
  iv. Broken links
• Forms
  i. Field validation
  ii. Error messages for wrong input
  iii. Optional and mandatory fields
• Database
• Cookies

Performance: Performance testing can be applied to understand the web site's scalability, or to benchmark performance in the environment of third-party products such as servers and middleware for potential purchase.
• Connection speed: tested over various networks like dial-up, ISDN, etc.
• Load:
  i. What is the number of users per unit of time?
  ii. Check for peak loads and how the system behaves.
  iii. Large amounts of data accessed by users.
• Stress:
  i. Continuous load.
  ii. Performance of memory, CPU, file handling, etc.

Usability: Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance

Server Side Interface: In web testing the server-side interface should be tested. This is done by verifying that communication happens properly. Compatibility of the server with software, hardware, network, and database should be tested.

Client Side Compatibility: Client-side compatibility is also tested on various platforms, using various browsers, etc.

Security: The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
• Network scanning
• Vulnerability scanning
• Password cracking
• Log review
• Integrity checkers
• Virus detection
 * Testing will be done on the database integrity.
 * Testing will be done on the client system side, on the temporary Internet files.
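The link checks listed above can be partly automated. Here is a small sketch that classifies anchors in a page as internal, external, or mail links; the sample HTML and the `example.com` domain are made up for the example, and a real broken-link check would additionally fetch each URL and inspect the HTTP status.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

SITE = "example.com"  # assumed domain under test

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify(href):
    """Sort a link into the categories used in web link testing."""
    if href.startswith("mailto:"):
        return "mail"
    host = urlparse(href).netloc
    if not host or host.endswith(SITE):
        return "internal"  # relative URL or same domain
    return "external"

page = """<a href="/about">About</a>
<a href="https://example.com/contact">Contact</a>
<a href="https://other.org/">Partner</a>
<a href="mailto:admin@example.com">Admin</a>"""

collector = LinkCollector()
collector.feed(page)
for href in collector.links:
    print(classify(href), href)
```

Running this prints one classified line per link, giving a checklist of internal, external, and mail links to verify by hand or with a fetcher.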

Web Testing: Complete guide on testing web applications

Let's have the web testing checklist first:
1) Functionality testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

1) Functionality Testing: Test all the links in web pages, the database connection, the forms used in web pages for submitting or getting information from the user, and cookies.

Check all the links:
• Test the outgoing links from all the pages of the specific domain under test.
• Test all internal links.
• Test links jumping to anchors on the same pages.
• Test links used to send email to the admin or other users from web pages.
• Test to check whether there are any orphan pages.
• Lastly, check for broken links in all the above-mentioned links.

Test forms on all pages: Forms are an integral part of any web site. They are used to get information from users and to keep interacting with them. So what should be checked on these forms?
• First check all the validations on each field.
• Check the default values of fields.
• Try wrong inputs in the fields of the forms.
• Check options to create, delete, view, or modify forms, if any.

Let's take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate signup steps. Each signup step is different but dependent on the other steps, so the signup flow should execute correctly. There are different field validations like email IDs and user financial information. All these validations should be checked in manual or automated web testing.

Cookie testing: Cookies are small files stored on the user's machine. They are basically used to maintain sessions, mainly login sessions. Test the application by enabling or disabling cookies in your browser options. Test whether the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies. (I will soon write a separate article on cookie testing.)

Validate your HTML/CSS: If you are optimizing your site for search engines, then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check that the site is crawlable by different search engines.

Database testing: Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete, or modify forms or do any DB-related functionality. Check that all database queries execute correctly and that data is retrieved and updated correctly. More on database testing could be load on the DB; we will address this under web load and performance testing below.

2) Usability Testing: Test navigation: Navigation means how the user surfs the web pages, uses different controls like buttons and boxes, and uses the links on the pages to reach different pages. Usability testing includes:
• The web site should be easy to use.
• Instructions should be provided clearly. Check whether the provided instructions are correct, i.e. whether they satisfy their purpose.
• A main menu should be provided on each page, and it should be consistent.

Content checking: Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow some commonly accepted standards for web page and content building, like those mentioned above about annoying colors, fonts, frames, etc. Content should be meaningful. All the anchor text links should work properly. Images should be placed properly with proper sizes. These are some basic standards that should be followed in web development; your task is to validate all of this in UI testing.

Other information for user help: things like a search option, sitemap, help files, etc. A sitemap should be present with all the links in the web site in a proper tree view of navigation; check all the links on the sitemap. A "search in the site" option will help users find the content pages they are looking for easily and quickly. These are all optional items and, if present, should be validated.

3) Interface Testing: The main interfaces are:
• Web server and application server interface
• Application server and database server interface
Check that all interactions between these servers execute properly and that errors are handled properly. If the database or web server returns an error message for a query from the application server, then the application server should catch these error messages and display them appropriately to users. Check what happens if the user interrupts a transaction in between. Check what happens if the connection to the web server is reset in between.

4) Compatibility Testing: Compatibility of your web site is a very important testing aspect. See which compatibility tests should be executed:
• Browser compatibility
• Operating system compatibility
• Mobile browsing
• Printing options

Browser compatibility: In my web-testing career I have experienced this as the most influential part of web site testing. Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, performing security checks, or doing validations, then put more stress on browser compatibility testing of your web application. Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari, and Opera, with different versions.

OS compatibility: Some functionality in your web application may not be compatible with all operating systems. New technologies used in web development, like graphics designs and interface calls such as different APIs, may not be available in all operating systems. Test your web application on different operating systems like Windows, Unix, Mac, Linux, and Solaris, with different OS flavors.

Mobile browsing: This is a new technology age, so in the future mobile browsing will rock. Test your web pages on mobile browsers; compatibility issues may exist on mobile.

Printing options: If you are providing page-printing options, then make sure fonts, page alignment, and page graphics are printed properly. Pages should fit the paper size, or the size mentioned in the printing option.

5) Performance Testing: A web application should sustain heavy load. Web performance testing should include:
• Web load testing
• Web stress testing
Test application performance on different internet connection speeds. In web load testing, test whether many users can access or request the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.

Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress, checking how the system reacts to stress and how it recovers from crashes. Stress is generally applied to input fields, login, and signup areas. In web performance testing, web site functionality on different operating systems and different hardware platforms is also checked for software and hardware memory leakage errors.

6) Security Testing: Following are some test cases for web security testing:
• Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
• If you are logged in using a username and password and browsing internal pages, then try changing URL options directly. E.g., if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL's site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view others' stats.
• Try some invalid inputs in input fields like the login username, password, and input text boxes. Check the system's reaction to all invalid inputs.
• Web directories or files should not be accessible directly unless a download option is given.
• Test the CAPTCHA against automated script logins.
• Test whether SSL is used for security measures. If used, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
• All transactions, error messages, and security breach attempts should be logged in log files somewhere on the web server.
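The URL-tampering test above (swapping the site ID parameter for one the logged-in user does not own) can be sketched as a server-side access-control check. The users, site IDs, and ownership table here are hypothetical, invented only to illustrate the rule being tested.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical ownership data: which publisher owns which site IDs.
OWNED_SITES = {"publisher1": {"123"}, "publisher2": {"456"}}

def can_view_stats(logged_in_user, url):
    """Allow access only if every siteID in the URL belongs to the user."""
    query = parse_qs(urlparse(url).query)
    site_ids = query.get("siteID", [])
    owned = OWNED_SITES.get(logged_in_user, set())
    return bool(site_ids) and all(sid in owned for sid in site_ids)

# Positive case: the user views their own stats.
assert can_view_stats("publisher1", "https://example.com/stats?siteID=123")

# Tampering case: the same user swaps in someone else's site ID --
# access must be denied.
assert not can_view_stats("publisher1", "https://example.com/stats?siteID=456")

print("access-control checks passed")
```

A security tester exercises exactly these two cases from the browser: the legitimate URL should work, and the tampered URL should produce an access-denied response rather than another user's data.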

Mailtodhir (talk) 11:06, 21 April 2011 (UTC)

Hello Mailtodhir and welcome to Wikipedia. I've deleted a large amount of text from this page because it seemed to have been copied and pasted from existing sources on the web. We have to be careful not to violate copyright. You can create or improve existing articles by summarising existing text, but copying it onto Wikipedia is not advisable unless you have been given permission. MartinPoulter (talk) 11:42, 21 April 2011 (UTC)

Welcome!
Thanks. Mailtodhir (talk) 16:56, 22 April 2011 (UTC)

Welcome to Wikipedia, Mailtodhir! I am MartinPoulter and have been editing Wikipedia for quite some time. Thank you for your contributions. I just wanted to say hi and welcome you to Wikipedia! If you have any questions check out Questions, or feel free to leave me a message on my talk page or type helpme at the bottom of this page. I love to help new users, so don't be afraid to leave a message! I hope you like the place and decide to stay. Here are some pages that you might find helpful: I hope you enjoy editing here and being a Wikipedian! Also, when you post on talk pages you should sign your name using four tildes (~~~~); that should automatically produce your username and the date after your post. Again, welcome! MartinPoulter (talk) 11:42, 21 April 2011 (UTC)
 * Introduction
 * The five pillars of Wikipedia
 * How to edit a page
 * Help pages
 * How to write a great article