Can someone explain exactly what latency is measured by the biolatency tool (bcc)? - linux-kernel

Is it:
The time between the Virtual File System calling into the block I/O layer and the time the request is submitted to the block device driver?
The time between the request being submitted to the block I/O layer and the time the request is serviced by the disk?

Just checked the code. It measures from the time the request is issued to the device until its completion. Depending on the arguments passed, it either starts measuring from the blk_account_io_start() function, which tracks the request when it is first queued in the kernel, or (the default) from blk_start_request(), which tracks when the disk I/O is actually issued to the device.
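A minimal sketch of the same idea using the bcc Python front end, for illustration only (this is not the actual biolatency source; kernel function names such as blk_mq_start_request, blk_account_io_start and blk_account_io_done differ between kernel versions, so treat the probe points as assumptions):

```python
# Stripped-down sketch of biolatency's approach (illustration, not the real tool).
import time
from bcc import BPF

prog = """
#include <uapi/linux/ptrace.h>
#include <linux/blkdev.h>

BPF_HASH(start, struct request *);    // request pointer -> start timestamp
BPF_HISTOGRAM(dist);                  // log2 histogram of latencies (usecs)

int trace_req_start(struct pt_regs *ctx, struct request *req) {
    u64 ts = bpf_ktime_get_ns();
    start.update(&req, &ts);
    return 0;
}

int trace_req_done(struct pt_regs *ctx, struct request *req) {
    u64 *tsp = start.lookup(&req);
    if (tsp == 0)
        return 0;                     // missed the start event
    u64 delta = bpf_ktime_get_ns() - *tsp;
    dist.increment(bpf_log2l(delta / 1000));
    start.delete(&req);
    return 0;
}
"""

b = BPF(text=prog)

include_queued = False  # analogous to biolatency's -Q option
if include_queued:
    # clock starts when the request is first queued in the kernel
    b.attach_kprobe(event="blk_account_io_start", fn_name="trace_req_start")
else:
    # default: clock starts when the request is issued to the device
    b.attach_kprobe(event="blk_mq_start_request", fn_name="trace_req_start")

b.attach_kprobe(event="blk_account_io_done", fn_name="trace_req_done")

print("Tracing block I/O latency... Ctrl-C to print the histogram.")
try:
    time.sleep(99999999)
except KeyboardInterrupt:
    pass
b["dist"].print_log2_hist("usecs")
```

The difference between the two start probes is exactly the difference between the two options in the question: with the queued variant the histogram includes time spent waiting in the kernel queue, with the default it covers only device service time.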

Related

Why is the JMeter result different from the user experience result?

We are currently conducting performance tests on the two web apps that we have; one runs within a private network and the other is accessible to everyone. For both apps, a single page load of the landing/initial page takes only 2-3 seconds from a user's point of view, but when we use Blaze and JMeter the results are between 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time/Sample time in JMeter, and from the Elapsed column when exported to .csv. Please help, I'm stuck.
We have tried running the tests on multiple PCs within the office premises, as well as on a PC accessed remotely at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user.
Where a delta exists, it almost certainly means that two different things are being timed. It would help to understand what you are timing on your front end: a standard metric such as W3C domComplete, Time to Interactive, First Contentful Paint, or some other point, and then compare where that falls in the drill-down on the Performance tab of Chrome. Odds are that a lot is occurring that is not visible to you but is being captured by JMeter.
You might also look for other threads here on how JMeter operates compared to a "real browser". There are differences which could come into play and affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components where you do not have permission to test their servers.
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It might be that you're getting HTTP status 304 for all requests in the browser because responses are being served from the browser cache and no actual requests are made, while JMeter always uses a "clean" session.
Pay attention to the Connect Time and Latency metrics, as it might be that the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
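To make those three numbers concrete, here is a rough sketch that measures connect time, time to first byte, and total elapsed time for a single request (measured from Python rather than JMeter; example.com and plain HTTP are placeholders):

```python
# Rough illustration of connect time, latency (time to first byte) and
# elapsed time for one HTTP request; a sketch, not JMeter's implementation.
import socket
import time

HOST, PORT = "example.com", 80   # placeholder target, plain HTTP for brevity

t0 = time.perf_counter()
sock = socket.create_connection((HOST, PORT), timeout=10)
t_connect = time.perf_counter()              # connection established

sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
first_byte = sock.recv(1)                    # first byte of the response
t_first = time.perf_counter()

body = b""
while chunk := sock.recv(4096):              # drain the rest of the response
    body += chunk
t_done = time.perf_counter()
sock.close()

print("connect time:            %.3f s" % (t_connect - t0))
print("latency (to first byte): %.3f s" % (t_first - t0))
print("elapsed (full response): %.3f s" % (t_done - t0))
```

In a browser's network waterfall these roughly correspond to the Connecting, Waiting (TTFB) and Content Download phases, which is why comparing the JMeter columns with the Chrome Performance/Network tabs usually explains where the delta comes from.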
In general given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page

dotTrace shows "Waiting for CPU" while executing multiple ASP.NET MVC actions

I am currently trying to improve the performance of my ASP.NET application. While doing this I found that when I call the same action multiple times, or different actions within the same controller, via AJAX calls, they take unequal amounts of time. Please refer to the image below.
(image: timeline of the requests)
On digging in with the dotTrace tool, I found that this difference is traced as "Waiting for CPU", i.e. the task is waiting for thread assignment. How can we optimize this so that all of the same actions take an equal amount of time to execute?
Your CPU is at its maximum capacity. Close unused programs to free up some CPU.

Does a process waiting on a network response take cpu/ram resources?

For example, a Ruby script that sends an HTTP GET request. Whilst waiting/receiving the response, is that process using CPU or RAM resources?
If the response takes 500ms, does that mean that 500ms of CPU/RAM is taken up and cannot be used? Or does the process go into a kind of "sleep" state until the response is received, freeing up resources in the meantime?
It doesn't consume your CPU, but it will not free the memory that is already allocated. It just waits (sleeps) until the data is available.
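A quick way to see this for yourself (a sketch in Python rather than Ruby, with example.com as a placeholder) is to compare wall-clock time with CPU time around a blocking request; the CPU time stays tiny while the wall-clock time includes the whole network wait:

```python
# Compare wall-clock time with CPU time around a blocking HTTP request.
# The gap shows the process is sleeping in the kernel, not burning CPU,
# while its memory stays allocated the whole time.
import time
import urllib.request

cpu_before = time.process_time()     # CPU time consumed by this process
wall_before = time.perf_counter()    # wall-clock time

urllib.request.urlopen("https://example.com", timeout=10).read()

print("wall-clock: %.3f s" % (time.perf_counter() - wall_before))  # includes the wait
print("cpu time:   %.3f s" % (time.process_time() - cpu_before))   # typically near zero
```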

Infinispan Hot Rod delay

We are using Infinispan Hot Rod in our application.
Sometimes retrieval from the cache takes more time, and this does not happen consistently. Most of the time it takes 6 ms, but at times it takes much longer (200 ms).
The size of the object retrieved from the cache is around 200 bytes.
We tested with both Infinispan 5.2.1 and JDG 6.3.2.
Has anybody faced this issue?
Thanks
Lives
Remember that you're running Java, which means the garbage collector can fire at any time; that will cost you 200 ms if you're very lucky, several seconds if you're not, and up to minutes if you have a large heap and poorly tuned GC settings.
Since retrieval from a distributed cache requires an RPC to another node and handling of that RPC there, thread scheduling also plays a vital role, and on a busy system it's no surprise for the thread to end up waiting.
From the Infinispan perspective, there's nothing the retrieval should wait for. The request gets translated into an RPC to the remote node, where it is handled by the same thread that received the message. The request does not wait for any locks.
In JGroups, there may be some delay involved. A message can get lost on the network, or discarded on the receiver if it cannot handle the load, and then it is resent. Also, the UFC flow-control protocol makes sure the sender does not outpace what the receiver can handle.
Anything built on top of non-realtime Java works on a best-effort basis, and sometimes sh!t happens. 200 ms is still a good response time.

Handling when there is not enough memory available to start a thread in C#

I have a system which starts a new thread for each request to the application.
If the application receives hundreds of requests, there may not be enough memory available to start a new thread, so it will throw an exception.
I would like to know an ideal mechanism to handle this kind of situation.
For example, if the application is receiving lots of requests and there is not enough memory, or the number of active threads has reached the maximum, I would like to delay processing the other requests.
But I have no idea how to implement this.
Easy solution: Increase thread-pool limits. This is actually viable although out of fashion these days.
More thorough solution: Use a SemaphoreSlim to limit the number of concurrently active requests. Make sure to wait asynchronously; if you wait synchronously you'll again burn a thread while waiting. After having waited asynchronously you can resume normal synchronous blocking processing. This requires only small code changes (a sketch of this pattern follows below).
Most thorough solution: Implement your processing fully async. That way you never run out of threads.
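Here is a sketch of the semaphore pattern from the middle option, written with Python's asyncio rather than C# (the C# version would use SemaphoreSlim with WaitAsync); the limit of 100 concurrent requests is an arbitrary assumption:

```python
# Sketch of the "limit concurrency with a semaphore, wait asynchronously"
# pattern (Python asyncio analogue of C#'s SemaphoreSlim.WaitAsync).
import asyncio

MAX_CONCURRENT = 100  # arbitrary limit; tune to the workload

async def handle_request(request_id: int, limiter: asyncio.Semaphore) -> str:
    async with limiter:              # waits asynchronously when the limit is hit
        await asyncio.sleep(0.1)     # stand-in for the real request processing
        return f"processed {request_id}"

async def main():
    limiter = asyncio.Semaphore(MAX_CONCURRENT)
    # a burst of requests: only MAX_CONCURRENT run at once, the rest wait
    # in the semaphore queue without consuming a thread each
    results = await asyncio.gather(*(handle_request(i, limiter) for i in range(1000)))
    print(len(results), "requests processed")

asyncio.run(main())
```

The key point, in either language, is that the wait on the semaphore is itself asynchronous, so requests queued behind the limit do not each hold a thread while they wait.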
