
Throughput vs Response Time

Response Time – the amount of time a system takes to process a request after it has received one.
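To make that concrete, here is a minimal sketch (Java 11+, standard `HttpClient`; the URL is a placeholder) that times a single request end to end. What it reports is the response time as seen by the caller, which includes both network latency and server processing time.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; substitute the service you want to measure.
        URI uri = URI.create("https://example.com/");
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();

        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Response time as seen by the caller = latency + processing time.
        System.out.println("Status " + response.statusCode()
                + ", response time: " + elapsedMs + " ms");
    }
}
```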

Latency – in simplest terms, this is remote response time. For instance, suppose you want to invoke a web service or access a web page. Apart from the processing time needed on the server to handle your request, there is a delay involved for your request to reach the server. When we refer to latency, it's that delay we are talking about. It becomes a big issue when the data center hosting your service/page is remote. Imagine your data center is in the US and you are accessing it from India. If ignored, latency can cause you to breach your SLAs. Though it's quite difficult to improve latency, it's important to measure it. How do we measure latency? There are some network simulation tools out there that can help you – one such tool can be found here.
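For a rough measurement of latency on its own, you can time something that involves almost no server-side processing. The sketch below is a hypothetical probe (host and port are placeholders) that times only the TCP handshake, which approximates a single network round trip to the server.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class LatencyProbe {
    public static void main(String[] args) throws IOException {
        // Placeholder host; port 443 assumes the server accepts HTTPS connections.
        String host = "example.com";
        int port = 443;

        // Time only the TCP handshake: the server does no application work,
        // so the elapsed time approximates pure network round-trip delay.
        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5_000);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Connect latency to " + host + ": " + elapsedMs + " ms");
    }
}
```

Running a probe like this from different regions (say, India vs the US) makes the geographic component of latency visible.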

Throughput – the number of transactions per second your application can handle (the motivation for, and the result of, load testing). A typical enterprise application will have lots of users performing lots of different transactions. You should ensure that your application meets the required capacity of the enterprise before it hits production. Load testing is the solution for that.
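A real load test belongs in a dedicated tool such as JMeter or Gatling, but the arithmetic is simple enough to sketch. The toy harness below (placeholder URL; the worker count and test window are arbitrary) fires requests from a fixed thread pool for a fixed duration and reports completed requests per second.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ThroughputTest {
    public static void main(String[] args) throws Exception {
        URI uri = URI.create("https://example.com/"); // placeholder target
        int threads = 10;         // concurrent virtual users (arbitrary)
        long durationMs = 10_000; // test window: 10 seconds (arbitrary)

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        AtomicLong completed = new AtomicLong();
        long deadline = System.currentTimeMillis() + durationMs;

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                // Each worker sends requests back-to-back until the deadline.
                while (System.currentTimeMillis() < deadline) {
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        completed.incrementAndGet();
                    } catch (Exception e) {
                        // Failed requests do not count toward throughput.
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(durationMs + 5_000, TimeUnit.MILLISECONDS);

        // Throughput = completed transactions / elapsed seconds.
        double throughput = completed.get() / (durationMs / 1000.0);
        System.out.printf("Throughput: %.1f requests/second%n", throughput);
    }
}
```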

Think about a garden hose. The wider the hose, the more water can come out. You would probably measure that _throughput_ in gallons per minute. A fire hose has more throughput than a garden hose.

Now think about the amount of time it takes from the time you turn on the spigot until the water comes out the end of the hose. You could call that _response time_. You would probably measure that in seconds.

The length of the hose will affect the response time. If it's clogged or kinked somewhere, the throughput will be decreased.