Imagine you are running a festive season sale for your online market and you would like to know how your system will behave when the anticipated number of users are logged in to your website: what will the load time be? The response time? How quickly can a user buy a product or navigate between pages? If you uncover performance issues early in the cycle, you can save your website from crashing and losing customers. Yet companies continue to overlook the importance of performance testing, frequently deploying applications with little or no understanding of their performance, only to be beset with performance and scalability problems soon after the release.

Performance tests reveal how a system behaves and responds in various situations. More importantly, performance testing uncovers what needs to be improved before the product goes to market. Performance testing is done to make sure an app runs fast enough to keep a user’s attention and interest. The bottom line: performance impacts the business.

An application is made up of many components. At a high level we can define these as the client, the application software, and the hosting infrastructure. The latter includes the servers required to run the software as well as the network infrastructure that allows all the application components to communicate. Increasingly this also includes the performance of third-party service providers, which are an integral part of modern, highly distributed application architectures. The bottom line is that if any of these areas has problems, application performance is likely to suffer.

Performance Testing Helps:

  1. To identify potential bottlenecks of your application
  2. To discover the number of concurrent users that can access your application without a heavy degradation of the user experience
  3. To find out the breaking points of the technology stack used by your application
  4. To find your application’s behavior under load

Good Performance

A well-performing application is one that lets the end user carry out a given task without undue perceived delay or irritation. Performance really is in the eye of the beholder. With a performant application, users are never greeted with a blank screen during login and can achieve what they set out to accomplish without their attention wandering.

How to measure Performance?

Key Performance Indices

a) Availability

The amount of time an application is available to the end user. Lack of availability is significant because many applications will have a substantial business cost for even a small outage. In performance terms, this would mean the complete inability of an end user to make effective use of the application either because the application is simply not responding or response time has degraded to an unacceptable degree.

b) Response time

The amount of time it takes for the application to respond to a user request. In performance testing terms you normally measure system response time, which is the time between the end user requesting a response from the application and a complete reply arriving at the user’s workstation. In the current frame of reference a response can be synchronous (blocking) or increasingly asynchronous, where it does not necessarily require end users to wait for a reply before they can resume interaction with the application. More on this in later chapters.
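At its simplest, response time can be measured by timing the interval between sending a request and receiving the complete reply. The sketch below illustrates the idea with a stand-in function (a hypothetical `fake_request` that sleeps to simulate server work) rather than a real network call, so it runs anywhere:

```python
import time

def fake_request():
    """Stand-in for a real HTTP call; sleeps to simulate server-side work."""
    time.sleep(0.05)
    return "200 OK"

# Measure end-to-end response time: request sent -> complete reply received
start = time.perf_counter()
status = fake_request()
elapsed = time.perf_counter() - start  # response time in seconds

print(f"status={status}, response time={elapsed * 1000:.1f} ms")
```

In a real test the timed call would be an actual HTTP request; load testing tools record exactly this interval for every simulated user.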

c) Throughput

For every application there are many users performing many different requests. Throughput is the rate at which application-oriented events occur: it indicates the number of transactions per second an application can handle, or the amount of work produced over time during a test. A good example would be the number of hits on a web page within a given period of time.
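As a quick illustration, throughput is simply completed transactions divided by elapsed time. The numbers below are assumed values for illustration only:

```python
# Hypothetical totals from a test run (assumed numbers, for illustration only)
completed_transactions = 4500
test_duration_seconds = 300.0

# Throughput: transactions handled per second over the test window
throughput = completed_transactions / test_duration_seconds

print(f"throughput = {throughput:.1f} transactions/sec")
```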

d) Utilization

The percentage of the theoretical capacity of a resource that is being used. Examples include how much network bandwidth is being consumed by application traffic or the amount of memory used on a web server farm when 1,000 visitors are active.

Pythonic Solution for Performance:

Locust is an open-source, distributed load testing tool intended for load testing websites. A fundamental feature of Locust is that you can describe all your test cases in Python code.

a) This lightweight, distributed and scalable framework helps us find out how many concurrent users a system can handle by writing test case scenarios in Python code. It can be used for websites, web applications, and web-based services.

b) Locust is completely event-based, and therefore it is possible to support thousands of users on a single machine. Most load testing tools are thread-based, and benchmarking thousands of users with thread-based tools is not feasible.

In contrast to many other event-based tools, it doesn’t use callbacks. Instead it uses lightweight processes, through gevent. JMeter, for example, is thread-bound: for every user you want to simulate, you need a separate thread. Needless to say, benchmarking thousands of users on a single machine just isn’t feasible that way.

c) There is no need for clunky UIs or bloated XML; just code as you normally would. Being based on coroutines instead of callbacks (aka boomerang code) allows the code to look and behave like normal, blocking Python code.

d) Locust supports running load tests distributed over multiple machines. Being event based, even one Locust node can handle thousands of users in a single process.

e) Locust has a neat HTML+JS user interface that shows relevant test details in real time. And since the UI is web-based, it’s cross-platform and easily extendable.
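To make the "test cases as Python code" point concrete, here is a minimal sketch of a Locust scenario file (the URL paths and weights are assumptions for illustration; adapt them to your site):

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between tasks
    wait_time = between(1, 5)

    @task
    def index(self):
        # Hit the home page (path assumed for illustration)
        self.client.get("/")

    @task(3)
    def browse_products(self):
        # Weighted 3x: simulated users browse products more often
        self.client.get("/products")
```

Save this as, say, `locustfile.py` and run `locust -f locustfile.py` (pointing `--host` at your site), then open the web UI at `http://localhost:8089` to set the user count and spawn rate. For distributed runs, start one process with `--master` and others with `--worker`.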

How to check Performance Results?

a) Response time distribution: This is the first data you will want to check to understand how response time varies as the number of users grows, or during peak load when N users are accessing the system. A significant variation in response time can indicate a long-running backend query or a login failure for some of the simulated users. It is therefore recommended to plot a users-vs-response-time graph with checkpoints for all scenarios; if a peak is observed in both the checkpoints and the response time, that helps identify a case of failed logins.

b) Throughput: Along with response time, performance testers are usually most interested in how much data or how many use cases can be handled simultaneously. You can think of this measurement as throughput to emphasize how fast a particular number of use cases are handled, or as capacity to emphasize how many use cases can be handled in a particular time period. A sudden reduction in throughput invariably indicates problems and may coincide with errors encountered by one or more virtual users. I have seen this frequently occur when the web server tier reaches its saturation point for incoming requests. Virtual users start to stall while waiting for the web servers to respond, resulting in an attendant drop in throughput. Eventually users will start to time out and fail, but you may find that throughput stabilizes again (albeit at a lower level) once the number of active users is reduced to a level that can be handled by the web servers.

Some basic statistics knowledge is required to analyze the graphs, since many of these tools give a lot of detail about the HTTP endpoints under test. Some of the key figures are as follows:

a) Number of requests per endpoint

b) Number of failures

c) Median/average response time

d) Min/max response time

e) Standard deviation

f) Percentile data
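All of these figures can be computed from the raw response times with Python's standard library. A minimal sketch, using a small set of made-up response times in milliseconds:

```python
import statistics

# Hypothetical response times (ms) collected for one endpoint during a test run
response_times = [100, 110, 115, 120, 130, 500]

num_requests = len(response_times)
minimum = min(response_times)
maximum = max(response_times)
average = statistics.mean(response_times)
median = statistics.median(response_times)
std_dev = statistics.stdev(response_times)
# 95th percentile: 95% of requests completed within this time
p95 = statistics.quantiles(response_times, n=100)[94]

print(f"requests={num_requests}, min={minimum}, max={maximum}")
print(f"median={median}, average={average:.1f}, stdev={std_dev:.1f}, p95={p95:.1f}")
```

Note how the single slow outlier (500 ms) drags the average well above the median and inflates the standard deviation; this is why median and percentile data are usually more informative than the average alone.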

Analyzing performance results and inferring the health of your application under test requires niche experience and an eye for visualizing the underlying performance issues in the app.