PRACTICAL 1: PERFORMANCE TESTING
In software engineering, performance testing is a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage.
It is not practical to generate the load with actual physical users, so load-testing software is used to create a simulated environment. Such software behaves as though real users were interacting with the system.
Examples: LoadRunner, JMeter, Capybara.
Figure 1: LoadRunner. Figure 2: JMeter.
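As a rough illustration of what such tools automate, here is a minimal Python sketch (standard library only) that simulates a handful of concurrent virtual users. The target URL, user count, and request count are placeholder assumptions for this practical; real tools such as LoadRunner and JMeter layer scripting, ramp-up profiles, and detailed reporting on top of this basic idea.

# Minimal virtual-user simulation: each thread repeatedly requests a URL,
# mimicking (in a very small way) what LoadRunner or JMeter do at scale.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # hypothetical system under test
VIRTUAL_USERS = 10                      # assumed number of simulated concurrent users
REQUESTS_PER_USER = 20                  # assumed requests sent by each user

def virtual_user(user_id):
    """Simulate one user sending requests and record each response time."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    all_timings = [t for user in results for t in user]
    print(f"requests sent: {len(all_timings)}")
    print(f"average response time: {sum(all_timings) / len(all_timings):.3f} s")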
Why do Performance Testing?
In brief, performance tests reveal how a system behaves and responds in various situations. A system may run very well with only 1,000 concurrent users, but how would it run with 100,000? In terms of performance, we wish to achieve high speed, scalability, and stability of the system.
SPEED: To check the speed/response time for the intended user. Sites that load fastest have a competitive advantage. Since everything on the internet is just a click away, it is vital to have quick load times to keep customers on your site and not your competitor’s. “Two hundred and fifty milliseconds, either slower or faster, is close to the magic number for competitive advantage on the Web.”
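To make the speed goal concrete, the short sketch below times a single page load and compares it with a 250 ms budget, echoing the figure quoted above. The URL and timeout are placeholder assumptions.

# Time one page load and compare it with an assumed 250 ms response budget.
import time
import urllib.request

URL = "http://localhost:8080/"   # illustrative placeholder, not a real endpoint
BUDGET_SECONDS = 0.250

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    response.read()
elapsed = time.perf_counter() - start

print(f"load time: {elapsed * 1000:.0f} ms "
      f"({'within' if elapsed <= BUDGET_SECONDS else 'over'} the {BUDGET_SECONDS * 1000:.0f} ms budget)")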
SCALABILITY: Not only is speed an important goal for performance, but scalability tests are extremely important if you want more users to interact with the system. How many more users can you support if you add another CPU to the database server? How long will the page take to load with this addition?
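One simple way to probe scalability, sketched below under the same placeholder assumptions, is to rerun an identical workload at increasing user counts and watch how throughput responds; where requests per second stop growing, the system has hit a scaling limit (for example, a saturated CPU on the database server).

# Rough scalability probe: repeat the same fixed workload with more and more
# concurrent users and report throughput (requests per second) at each level.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # illustrative placeholder
REQUESTS_PER_USER = 20           # assumed per-user workload

def one_user(_):
    """One simulated user sending a fixed number of requests."""
    for _ in range(REQUESTS_PER_USER):
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()

for users in (1, 5, 10, 25, 50):          # assumed ramp levels
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(one_user, range(users)))
    elapsed = time.perf_counter() - start
    total_requests = users * REQUESTS_PER_USER
    print(f"{users:3d} users -> {total_requests / elapsed:6.1f} requests/second")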
STABILITY: Is the application stable under expected and unexpected user loads? (AKA robustness.)
CONFIDENCE: Are you sure that users will have a positive experience on go-live day?
When to do Performance Testing?
Performance testing should be a major priority before releasing software or an application. It should be implemented early in development so as to catch bugs earlier and increase user satisfaction while saving you time and money down the line. Testing should be done during the design, development, and deployment phases to gain the desired results from our software/websites.
What should be tested?
- High-frequency transactions: The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
- Mission-critical transactions: The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has, by definition, the greatest impact.
- Read transactions: At least one read-only transaction should be included, so that the performance of such transactions can be differentiated from other, more complex transactions.
- Update transactions: At least one update transaction should be included, so that the performance of such transactions can be differentiated from other transactions. (A sample weighted mix covering these categories is sketched after this list.)
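One possible way to express such a transaction mix in a test script is as a weighted list of scenarios from which each virtual user picks its next action. The names, weights, and classifications below are illustrative assumptions only.

# A weighted transaction mix covering the categories above. Each virtual user
# picks its next transaction according to these weights.
import random

TRANSACTION_MIX = [
    # (name,                 weight, kind)
    ("search_catalogue",     0.50,  "read"),     # high-frequency, read-only
    ("view_account_summary", 0.25,  "read"),     # mission-critical read
    ("place_order",          0.15,  "update"),   # mission-critical update
    ("update_profile",       0.10,  "update"),   # lower-frequency update
]

def pick_transaction():
    """Choose the next transaction for a virtual user according to the weights."""
    names = [name for name, _, _ in TRANSACTION_MIX]
    weights = [weight for _, weight, _ in TRANSACTION_MIX]
    return random.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    sample = [pick_transaction() for _ in range(1000)]
    for name, _, kind in TRANSACTION_MIX:
        print(f"{name:22s} ({kind:6s}): {sample.count(name)} of 1000 picks")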
Performance Testing Process
Below is a generic performance testing process.
Figure 3: Testing Process
- Identify your testing environment - Know your physical test environment, production environment, and what testing tools are available. Understand the details of the hardware, software, and network configurations used during testing before you begin the testing process.
- Identify the performance acceptance criteria - This includes goals and constraints for throughput, response times, and resource allocation. It is also necessary to identify project success criteria outside of these goals and constraints. Testers should be empowered to set performance criteria and goals, because the project specifications often will not include a wide enough variety of performance benchmarks.
- Plan and design performance tests - Determine how usage is likely to vary amongst end users and identify key scenarios to test for all possible use cases. It is necessary to simulate a variety of end users, plan performance test data, and outline what metrics will be gathered.
- Configure the test environment - Prepare the testing environment before execution. Also arrange tools and other resources.
- Implement the test design - Create the performance tests according to your test design.
- Run the tests - Execute and monitor the tests. (A minimal sketch combining the criteria, implementation, and execution steps follows this list.)
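Tying the process together, the sketch below combines steps 2 (acceptance criteria), 5 (implement the test design), and 6 (run and monitor) in miniature: it declares assumed thresholds, runs a small multi-user load, and checks the measured average, 95th percentile, and throughput against those thresholds. The endpoint, load level, and thresholds are placeholder assumptions, not requirements from this practical.

# End-to-end sketch: declare acceptance criteria, implement a small load test,
# run it, and check the measurements against the criteria.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # assumed system under test
VIRTUAL_USERS = 20                      # assumed load level
REQUESTS_PER_USER = 25

# Step 2: performance acceptance criteria (assumed thresholds for this sketch).
CRITERIA = {
    "average_seconds": 0.500,    # mean response time
    "p95_seconds": 1.000,        # 95th percentile response time
    "min_throughput_rps": 50,    # requests per second
}

def virtual_user(_):
    """One simulated user: send requests in a loop and record response times."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    # Steps 5 and 6: run the test and monitor the results.
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        per_user = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    wall_elapsed = time.perf_counter() - wall_start

    timings = sorted(t for user in per_user for t in user)
    average = sum(timings) / len(timings)
    p95 = timings[int(0.95 * (len(timings) - 1))]
    throughput = len(timings) / wall_elapsed

    print(f"average: {average:.3f} s  p95: {p95:.3f} s  throughput: {throughput:.1f} req/s")
    print("average OK" if average <= CRITERIA["average_seconds"] else "average FAIL")
    print("p95 OK" if p95 <= CRITERIA["p95_seconds"] else "p95 FAIL")
    print("throughput OK" if throughput >= CRITERIA["min_throughput_rps"] else "throughput FAIL")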