Pages

Sunday, December 5, 2010

Testing Web Based Applications


Web applications are widely used and must meet the requirements of a wide range of users; therefore they are broader and more complex to test, as there are more features and more paths to cover.
So when we start testing a web application rather than a window-based application, the process of testing is different.
The factors that affect the testing of web applications are as follows:-
  1. Performance
  2. Security Threats
  3. Types of Users
  4. Accessibility Options
  5. Regulatory Compliance/Standards
  6. Technology Platforms

Performance:-
The application's performance may change due to the following reasons:
1.   Network Speed
2.   Browsers
3.   Number of Users
4.   Intranet versus Internet based Applications

Issues where we need to concentrate more:
•  Preparation of Test Plan:-
The test plan should be carefully written, covering all the scenarios that need to be tested.
•  Preparation of Test Cases:-
o   Prepare at least one test case covering each functionality that is to be tested
o   Prepare test cases covering other issues like:
-  Network speeds like
o   Performance in case of broadband usage
o   Performance in case of dial-up connections
Web application performance changes with network speed:
§  When the network speed is slow there is a possibility of getting errors
§  Images may take longer to download on slower networks

-  Browsers like
o   IE
o   Firefox
o   Google Chrome etc.
End users may use different types of browsers to access the application.
§  Testing on one browser does not assure that the application will work on all browsers
§  Even on similar browsers, the application may behave differently based on the screen resolution/hardware/software configuration

-  Sites like
o   HTTP
o   HTTPS
-  Number of users like
o   As the number of users increases or decreases during normal hours and peak hours, the application performance should not degrade
For example, in an online project like AP ONLINE:
The employees may be doing different tasks: some may be paying mobile bills, some may be paying current bills, and some may just be checking previous bookings. These tasks can relate to anything, such as online reservations for trains or buses, or checking the previously paid bills of a particular customer.
Therefore a large number of usage paths are possible, and all of them are supposed to work well.
Security threats
Firewalls:
Applications may behave differently across different firewalls. Applications may have certain web services or may operate on different ports that may have been blocked. So the applications need to be tested for these aspects as well.

Security Aspects:
There should be no compromise in the security of the data present on the web pages.


Types of Users
§  People with varying backgrounds & technical skills may use the application
§  User-friendly options should be provided on the site, and these need to be tested

Intranet versus Internet based Applications
  • For intranet-based applications, the number of users is generally known in advance, so the developers can make accurate assumptions about the people accessing the application.
  • Also, intranet users can generally access the application from 'trusted' sources.
It may be difficult to make similar assumptions for Internet-based applications:
  • For Internet applications, the users may need to be authenticated and more security measures have to be taken.

Accessibility:-
The supportive alternatives provided for people with disabilities should be checked in web-based applications.
Regulatory Compliance/Standards:
Depending on the nature of the application and sensitivity of the data captured the applications may have to be tested for relevant Compliance Standards. This is more crucial for Web Based Applications because of their possible exposure to a wide audience.

Technology platforms
The behaviour of the application varies from platform to platform, so it needs to be tested in different ways on different platforms.

Saturday, November 27, 2010

Recording a simple script on the Login dialog, with various options to generate the password

Step 1: Open QTP, click on Record, select "Windows Applications" in "Record and Run Settings" and click on "OK".

Step 2: Click on the Flight application, enter Username and Password and click on "OK". It generates the script as follows:

SystemUtil.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4a.exe"
Dialog("Login").WinEdit("Agent Name:").Set "agent"
Dialog("Login").WinEdit("Password:").SetSecure "4cf162a450b1f22358a30a702947fa15c2d4fe74"
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").Close


Step 3 : Click on "RUN" 

To get the encrypted password

There are 2 ways other than recording:
1. Using the Crypt.Encrypt method
2. Using the Password Encoder tool

Using the Crypt.Encrypt method, the script will be as follows:

SystemUtil.Run "C:\Program Files\Mercury Interactive\QuickTest Professional\samples\flight\app\flight4a.exe"
Dialog("Login").WinEdit("Agent Name:").Set "agent"
pwd = crypt.Encrypt("mercury")
Dialog("Login").WinEdit("Password:").SetSecure pwd
Dialog("Login").WinButton("OK").Click
Window("Flight Reservation").Close



The steps for the Password Encoder option:

Step a: Go to Start --> Programs --> QuickTest Professional --> Tools --> Password Encoder and type the password mercury in the "Password" edit box
Step b: Click on Generate; it will generate the encoded password
Step c: Copy and paste it into the script at the required location


A script using regular expressions that works for any web page, to count the total links present on the page

Dim oLink, Links, TotLinks
' Create a description object that matches all Link objects on the page
Set oLink = Description.Create
oLink("micclass").Value = "Link"
' The regular expression ".*" matches any browser title and any page title
Set Links = Browser("title:=.*").Page("title:=.*").ChildObjects(oLink)
TotLinks = Links.Count
Reporter.ReportEvent 2, "Res", "Total Links are: " & TotLinks
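If you also want to report the text of each link, the same Links collection can be looped over. A minimal follow-up sketch (not part of the original script), assuming the "innertext" run-time property is available on the Link objects of your application:

' Loop over the collection built above and report each link's text
Dim i
For i = 0 To Links.Count - 1
    Reporter.ReportEvent 2, "Link " & (i + 1), Links(i).GetROProperty("innertext")
Next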

Random Numbers in Flight Application

Step 1: Record a session on Flight Reservation for Insert New Order. The recorded script is as follows:


Window("Flight Reservation").Activate
Window("Flight Reservation").WinMenu("Menu").Select "File;New Order"
Window("Flight Reservation").ActiveX("MaskEdBox").Type "121210"
Window("Flight Reservation").WinComboBox("Fly From:").Select "Zurich"
Window("Flight Reservation").WinComboBox("Fly To:").Select "London"
Window("Flight Reservation").WinButton("FLIGHT").Click
Window("Flight Reservation").Dialog("Flights Table").WinButton("OK").Click
Window("Flight Reservation").WinEdit("Name:").Set "swarupa"
Window("Flight Reservation").WinButton("Insert Order").Click

Step 2: Apply the concept of random numbers to Fly From and Fly To as follows:


Window("Flight Reservation").Activate
Window("Flight Reservation").WinMenu("Menu").Select "File;New Order"
Window("Flight Reservation").ActiveX("MaskEdBox").Type "121210"
Window("Flight Reservation").WinComboBox("Fly From:").Select RandomNumber.Value(0,9)
Window("Flight Reservation").WinComboBox("Fly To:").Select Randomnumber.Value(0,9)
Window("Flight Reservation").WinButton("FLIGHT").Click
Window("Flight Reservation").Dialog("Flights Table").WinButton("OK").Click
Window("Flight Reservation").WinEdit("Name:").Set "sweeya"
Window("Flight Reservation").WinButton("Insert Order").Click

Note: As there are 10 values in the combo box, I have given the values for RandomNumber as 0 for the lowest interval and 9 for the highest value.

Step 3: Now run the test. You can see that the application takes random values for Fly From and Fly To every time we execute the script.
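If you prefer plain VBScript over the RandomNumber utility object, the same effect can be achieved with Randomize and Rnd. A minimal sketch (the range 0 to 9 comes from the note above; the combo box accepts an item index, as the recorded script already shows):

' Pick a random index between 0 and 9 using plain VBScript
Dim randomIndex
Randomize                                ' seed the random number generator
randomIndex = Int(Rnd * 10)              ' Rnd returns a value in [0, 1), so this gives 0..9
Window("Flight Reservation").WinComboBox("Fly From:").Select randomIndex
Window("Flight Reservation").WinComboBox("Fly To:").Select Int(Rnd * 10)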

Sunday, November 21, 2010

ISTQB Free Sample Downloads


Testing levels

Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test.
Unit Test/ component

The first test in the development process is the unit test. The source code is normally divided into modules, which in turn are divided into smaller pieces called units. These units have specific behavior. The test done on these units of code is called a unit test. Unit testing depends upon the language in which the project is developed. Unit tests ensure that each unique path of the project performs accurately against the documented specifications and contains clearly defined inputs and expected results.
Unit testing is also called component testing.
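To illustrate the idea with a small example (not part of the original definition), here is a minimal VBScript sketch of a unit test run inside QTP: a tiny function is exercised with a defined input, and the actual result is compared against the expected result.

' Unit under test: a small function with a specific, documented behavior
Function AddNumbers(a, b)
    AddNumbers = a + b
End Function

' Unit test: defined input, expected result, and a pass/fail verdict
Dim expected, actual
expected = 5
actual = AddNumbers(2, 3)
If actual = expected Then
    Reporter.ReportEvent 0, "AddNumbers unit test", "PASS: got " & actual                              ' 0 = micPass
Else
    Reporter.ReportEvent 1, "AddNumbers unit test", "FAIL: expected " & expected & ", got " & actual   ' 1 = micFail
End If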
Integration testing

The objective of integration testing is to make sure that the interaction of two or more components produces results that satisfy the functional requirements. In integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. Integration testing can also be treated as testing the assumptions of fellow programmers: during the coding phase, lots of assumptions are made about how you will receive data from other components and how you have to pass data to other components.

Bottom up integration testing
In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules that go towards the 'main' program are integrated and tested one at a time. Bottom-up integration also uses test drivers to drive and pass appropriate data to the lower-level modules. As the code for the other modules gets ready, these drivers are replaced with the actual modules. In this approach, the lower-level modules are tested extensively, thus making sure that the most heavily used modules are tested properly.
Top down integration testing
Top-down integration testing is an incremental integration testing technique which begins by testing the top-level module and progressively adds lower-level modules one by one. Lower-level modules are normally simulated by stubs which mimic the functionality of the lower-level modules. As you add lower-level code, you replace the stubs with the actual components. Top-down integration can be performed and tested in a breadth-first or depth-first manner.
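As a small illustration (not from the original text), here is a VBScript sketch of a stub: the top-level routine is tested while the lower-level module it depends on is simulated by a stub that returns a fixed, known value. The function names and the fare value are hypothetical; once the real module is ready, the stub is replaced with it.

' Stub simulating a lower-level module that is not yet ready:
' it returns a fixed fare instead of looking it up for real.
Function GetFare_Stub(flyFrom, flyTo)
    GetFare_Stub = 100                   ' hard-coded, known value
End Function

' Top-level routine under test, wired to the stub for now
Function CalculateTotal(flyFrom, flyTo, tickets)
    CalculateTotal = GetFare_Stub(flyFrom, flyTo) * tickets
End Function

' Exercise the top-level module against the stubbed dependency
MsgBox "Total for 3 tickets: " & CalculateTotal("Zurich", "London", 3)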

Hybrid integration testing

Top-down and bottom-up testing each have their advantages and disadvantages: in top-down integration testing it is very easy to follow the top-down software development process, while in bottom-up testing the code that is used most gets tested repeatedly. Hybrid (sandwich) integration combines both approaches.

Big bang integration testing

In big-bang integration testing, the individual modules of the program are not integrated until everything is ready. This approach is seen mostly with inexperienced programmers who rely on a 'run it and see' approach. In this approach, the program is integrated without any formal integration testing, and then run to ensure that all the components are working properly.

Regression testing

(Retesting + dependent functionality testing + new CRs' testing, i.e., CR [change request] testing on different builds)

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. It can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
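One common way to re-run previously recorded QTP tests as a batch is the QuickTest Automation Object Model. A minimal sketch, assuming QTP is installed on the machine and using hypothetical test paths (C:\Tests\...) that you would replace with your own regression suite:

' Re-run a fixed list of existing QTP tests as a simple regression batch
Dim qtApp, testPaths, i
testPaths = Array("C:\Tests\LoginTest", "C:\Tests\NewOrderTest")   ' hypothetical test paths

Set qtApp = CreateObject("QuickTest.Application")                  ' QTP Automation Object Model
qtApp.Launch
qtApp.Visible = True

For i = 0 To UBound(testPaths)
    qtApp.Open testPaths(i), True        ' open the test in read-only mode
    qtApp.Test.Run                       ' run and wait until the test completes
    qtApp.Test.Close
Next

qtApp.Quit
Set qtApp = Nothing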

System testing

System testing is probably the most important phase of the complete testing cycle. This phase starts after the completion of the other phases like unit, component and integration testing. During the system testing phase, non-functional testing also comes into the picture, and performance, load, stress and scalability testing are all performed in this phase.

Acceptance testing (UAT)
Acceptance testing can mean one of two things:
1. A smoke test used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often on their own hardware, known as user acceptance testing (UAT).
Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site.
Beta testing
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Software performance testing and load testing/non functional testing

Performance testing

A term often used interchangeably with 'stress' and 'load' testing. It checks whether the system meets its performance requirements. Different performance and load tools are used to do this.

Load testing – It is performance testing to check system behavior under load: testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress testing – The system is stressed beyond its specifications to check how and when it fails. It is performed under heavy load, like putting in values beyond the storage capacity, complex database queries, or continuous input to the system or database load.
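A very rough, first-impression way to look at response times under repeated load from within QTP is to loop over a transaction and time it with VBScript's Timer function. This only measures client-side elapsed time; dedicated tools such as LoadRunner are used for real load and stress tests. A minimal sketch based on the Flight Reservation steps recorded earlier (the 20 iterations are an arbitrary assumption):

' Repeat the "insert order" transaction and report how long each iteration takes
Dim i, startTime, elapsed
For i = 1 To 20                          ' arbitrary number of iterations
    startTime = Timer                    ' seconds since midnight
    Window("Flight Reservation").WinMenu("Menu").Select "File;New Order"
    Window("Flight Reservation").ActiveX("MaskEdBox").Type "121210"
    Window("Flight Reservation").WinComboBox("Fly From:").Select "Zurich"
    Window("Flight Reservation").WinComboBox("Fly To:").Select "London"
    Window("Flight Reservation").WinButton("FLIGHT").Click
    Window("Flight Reservation").Dialog("Flights Table").WinButton("OK").Click
    Window("Flight Reservation").WinEdit("Name:").Set "loadtest"
    Window("Flight Reservation").WinButton("Insert Order").Click
    elapsed = Timer - startTime
    Reporter.ReportEvent 2, "Iteration " & i, "Insert Order took " & elapsed & " seconds"
Next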

Volume testing – Volume testing belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain amount of data. This amount can, in generic terms, be the database size, or it could also be the size of an interface file that is the subject of volume testing. For example, if you want to volume test your application with a specific database size, you will expand your database to that size and then test the application's performance on it. Another example could be when there is a requirement for your application to interact with an interface file (this could be any file, such as .dat or .xml); this interaction could be reading and/or writing to/from the file. You will create a sample file of the size you want and then test the application's functionality with that file in order to test the performance.
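To generate the sample interface file mentioned above, plain VBScript with the FileSystemObject is enough. A minimal sketch; the output path and the 10 MB target size are assumptions to adjust for your own volume test:

' Create a dummy file of roughly the desired size for volume testing
Dim fso, outFile, targetSizeBytes, dataLine, bytesWritten
targetSizeBytes = 10 * 1024 * 1024                                      ' assumed target: ~10 MB
dataLine = String(98, "X") & vbCrLf                                     ' one 100-byte line of dummy data

Set fso = CreateObject("Scripting.FileSystemObject")
Set outFile = fso.CreateTextFile("C:\TestData\volume_test.dat", True)   ' hypothetical path (folder must exist)
bytesWritten = 0
Do While bytesWritten < targetSizeBytes
    outFile.Write dataLine
    bytesWritten = bytesWritten + Len(dataLine)
Loop
outFile.Close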

Stability testing
Stability testing checks to see if the software can continuously function well in or above an acceptable period. This activity of non-functional software testing is often referred to as load (or endurance) testing.

Security testing
Security testing is essential for software that processes confidential data, to prevent system intrusion by hacking.
GUI Software Testing: the testing done on the user interfaces of the project or application is called GUI testing.
§  Usability testing:-
Usability testing is needed to check if the user interface is easy to use and understand.
The primary point of usability testing is to provide feedback during the design/development process to ensure that the web site will actually be easy and effective to use and provide valuable information to the users. Four primary elements to measure are:
  • Ease and effectiveness of navigation - Do users find what they need easily? Is there a clear pattern to the navigation that fits easily into the user's mental model? Are your links labeled with terms that make sense to your users? (Or, are you speaking in your own private jargon!)
  • Usefulness of content - What information do your users want/need? Have you organized the content on each page in such a way that it is easy for your users to quickly find it? Or do they have to read all the fine print while standing on their heads?
  • Effectiveness of presentation - Did the graphic design, fonts and colors highlight the navigation and content, making the site easier to use? Or did the presentation distract or create a barrier between the user and the information?
  • Task success rate - Were the users able to accomplish the key tasks they needed/wanted to accomplish? If they were able to complete the task, did they feel satisfied, neutral, or angry and frustrated?

Security testing: is a process to determine that an information system protects data and maintains functionality as intended.
The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a security taxonomy helps us to understand these different approaches and meanings by providing a base level to work from. The concepts covered are as follows:
  1. Confidentiality
  2. Integrity
  3. Authentication
  4. Authorization
  5. Availability
  6. Non-repudiation

Scalability testing: It is an extension of performance testing. The purpose of scalability testing is to identify major workloads and mitigate bottlenecks that can impede the scalability of the application.
Use performance testing to establish a baseline against which you can compare future performance tests. As an application is scaled up or out, a comparison of performance test results will indicate the success of scaling the application. When scaling results in degraded performance, it is typically the result of a bottleneck in one or more resources.


Sanity Testing and Smoke Testing


§  Sanity testing or Build Verification Test (BVT) – Whenever we receive a build from the development team, the basic features of the application are tested to verify the stability of the application for further testing.
§  Smoke testing – Similar to a sanity test, this test is conducted to verify whether there are any issues in the software before releasing it to the test team.

§  Sanity testing is done by Test engineer.

§  Smoke testing is done by Developer or White box engineers.

Note: Sanity testing is done when the application is deployed into testing for the very first time. In smoke testing only positive scenarios are validated, but in sanity testing both the positive and negative scenarios are validated.


Exploratory testing – Also called ad hoc testing or random testing. When a tester tests an application by exploring it, drawing on previous experience, and writes test cases based on the application, this is exploratory testing.
 The main advantage of exploratory testing is that less preparation is needed, important bugs are found quickly, and at execution time, the approach tends to be more intellectually stimulating than execution of scripted tests.
Ad hoc testing: is a commonly used term for software testing performed without planning and documentation (but can be applied to early scientific experimental studies).
The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation; the tester seeks to find bugs by any means that seem appropriate.
Exhaustive Testing:- Exhaustive testing means testing the functionality with all possible valid and invalid data. It is not possible to test a functionality with all valid and invalid data; for example, a single 10-character text field restricted to the 26 lowercase letters already allows 26^10 (over 141 trillion) possible inputs.

Reliability Testing : The purpose of reliability testing is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements.

Installation Testing:- is one of the most important parts of testing activities. Installation is the first interaction of the user with our product, and it is very important to make sure that the user does not have any trouble installing the software.
It becomes even more critical now as there are different means to distribute the software. Instead of the traditional method of distributing software in physical CD format, software can be installed from the internet, from a network location, or it can even be pushed to the end user's machine.
The type of installation testing you do will be affected by many factors, such as the means of distribution described above.
Maintenance testing is testing which is performed to either identify equipment problems, diagnose equipment problems, or confirm that repair measures have been effective. It can be performed at the system level, the equipment level, or the component level. (See also recovery testing and failover testing.)

Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per the requirement or not. It is black-box type testing geared to the functional requirements of an application.

End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.


Test case prioritization technique



How to determine risk: This is another "it depends" question. Not having any details, here are a few generalized principles:

* If it is a new application developed from scratch, then everything is equal risk and bugs could be anywhere.

* If it is a new application developed from existing components/modules, then risks are at the integration level. Each module may work properly but they may not be re-used in the right context or assembled correctly.

* If it is an existing application that is having new features added, then the new features themselves are the greatest risk.

* If it is a maintenance release (bug fixes only) of an existing application, then the validity of the bug fixes comes first.

How to estimate the time taken to test the application?


1) Number of use cases designed for that application: these are straightforward application usages which the customer might be performing. Believe me, there can be many more use cases that can be derived, which the customer might not be very interested in. The idea is to test first those scenarios which are of top priority for the customer.

2) There might be cases where many test cases perform the same operations because of their interrelation with other cases. Identify those and make sure you don't over-test your application; test rework can be reduced by identifying redundant test cases. Prepare a traceability matrix: it will give you a clear idea of how requirements are reused in different test cases.