Postman requests are getting faster and faster - spring

Every Postman request takes less and less time.
This is the first request,
and this is the second.
Why does it work like this? I saw this when I was writing a Spring application, and now the same happens with Quarkus.

It is similar to running JSP pages in a Spring application: the first time, everything gets compiled, built, rendered, and presented (among many other operations), but if you hit refresh for that JSP page again, it loads much faster, because it is not compiled, built, and rendered every time; it is just presented.
This is just a very simple example to get the idea across; you are probably not using JSP pages, but the concept is still the same.
Basically, the first request does, in theory, something like 10 tasks, but once everything is done, the number of tasks is much lower, hence the speed.
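The same warm-up effect is easy to reproduce in plain Java, outside of any framework: the first run of a piece of code pays for class loading and JIT compilation, while later runs reuse the compiled code. A minimal sketch (the workload here is arbitrary, just something for the JIT to chew on):

```java
// Minimal sketch of JVM warm-up: the first iteration pays for class loading
// and JIT compilation; later iterations reuse the compiled code and are
// typically faster. The workload itself is arbitrary.
public class WarmupDemo {
    static long doWork() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += Integer.toString(i).hashCode();
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int run = 1; run <= 5; run++) {
            long start = System.nanoTime();
            doWork();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("run " + run + ": " + elapsedMs + " ms");
        }
    }
}
```

On a typical JVM the first run prints a noticeably larger number than the later ones, which is the same pattern you see in Postman.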

Related

Web forms: Go back in history without refreshing page

Is it possible to go back in a page without reloading it?
I am developing a Web Forms website, and every time I go back in history, the page reloads (and takes a long time).
Honestly, no.
The life cycle of a Web Form is very specific, and the page goes through it every time it is run (that is, every time you request it through your browser).
On the other hand, you can always optimize your page to make it load faster. How you do that depends on many things, one of which is what code runs on the server side when the page loads, and whether any portions of that code can be either optimized for speed or moved into event handlers to be executed at a later point in time. For example, if you're fetching data from a database when your page loads, consider applying paging to narrow the number of selected rows.
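The paging idea can be sketched in plain Java (illustrative only; in a real application you would push the paging into the SQL itself, e.g. with OFFSET/FETCH or TOP, so the database never materializes the full result set):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustrative sketch: paging returns one slice of the rows instead of all
// of them. In practice the equivalent clause lives in the SQL query, so the
// full result set is never selected in the first place.
public class Paging {
    static <T> List<T> page(List<T> rows, int pageNumber, int pageSize) {
        int from = Math.min(pageNumber * pageSize, rows.size());
        int to = Math.min(from + pageSize, rows.size());
        return rows.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> rows = IntStream.range(0, 95).boxed().collect(Collectors.toList());
        System.out.println(page(rows, 0, 10)); // first page: 10 rows
        System.out.println(page(rows, 9, 10)); // last partial page: 5 rows
    }
}
```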
Please, feel free to ask a new question if you decide to take that course of action.

Please help resolve a bottleneck in wait times for HTTP responses

As far as performance goes, the server is doing fine, with the exception of the HTTP response wait times. This will become more of an issue as we grow our line of online services. All things being equal, I'm confused as to why this new server is not loading pages as quickly as an older server running multiple websites, logging, etc.
Here is a screenshot from http://www.gtmetrix.com, the online testing tool I've been using. These results are consistent regardless of the time of day. The numbers here don't make sense: the new site's page is 75% smaller, yet its total load time is only 26ms faster. In the image below, the left side is the NEW SERVER and the right side is the OLD SERVER.
The left portion of the timeline is the handshaking portion, so you can see the new server is about the same speed there. The purple middle section represents wait time; it is about 4 times the delay in milliseconds of the OLD SERVER. The grayish section on the right represents the actual time to download the file. You will also notice that the new server is significantly faster at downloading the response, which is most likely due to the 75% decrease in response size.
You can see the complete results for the new server here. http://gtmetrix.com/reports/204.193.113.47/Kl614UCf
Here's a table of the differences that I'm aware of; let me know if you see one that could be the culprit. I forgot to add this to the table, but the old server is in production, serving requests right now while www.gtmetrix.com is hitting it, in contrast to my new server, which has only me connecting and generating requests.
My current hypothesis is that the slowness is caused by some combination of the server being virtualized, incorrect IIS settings, or the difference between 32-bit and 64-bit OSes.
OK...
The server is in Sarasota(?) and the test agent is in Vancouver, so they are roughly 4,356km apart (as the crow flies), meaning the best round-trip time you could hope for is around 45ms.
Given that it won't be a direct route and that things like routers will add latency, the 155ms round trip you seem to be getting is pretty reasonable.
Looking at the request for the HTML page, the 344ms to complete it is a pretty good time: basically 114ms to set up the connection, 115ms to receive the first bytes from the server, and then 155ms to get the complete response.
Unless you can decrease the round-trip time, this isn't going to improve much. Have you tried testing from gtmetrix's Dallas server as a comparison?
If it is a slow server response then something like PAL (http://pal.codeplex.com/) is worth using as a first look to see what's happening on the server but I'd also look how quickly the SQL server is responding to the queries that are used on the test page.
A couple of things you want to look at later in the waterfall...
For the two files that are hosted on ajax.aspnetcdn.net, it takes longer to resolve their DNS name than it does to download them, so you may want to consider hosting them yourself.
For the text-based content, e.g. HTML, CSS, JS, etc., what level of gzip compression are you applying, and are the compressed files being cached on the server? (The server times for them look a bit long.)
Looking at the complete results, it seems the lower bound for the wait times would be 115ms. Not a single request is faster, most are around 125ms, and judging from the requested resources, there's a lot of static resources as well, so serving the response should not involve a lot of CPU. Even though responses are as small as 123 bytes, there's still this delay.
So it looks like a general issue, possibly not even related to IIS. Here are some ideas for how I'd try to debug this:
How long does a ping roundtrip take? (i.e. Is it a general network issue, routing etc.?)
How long do HTTP requests take when done from the server box (e.g. to localhost)? (If they all take more than ~100ms, start profiling inside the server box)
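The second check can be automated. Below is a self-contained Java sketch that times an HTTP round trip to localhost; the JDK's built-in `com.sun.net.httpserver.HttpServer` is only a stand-in so the example runs on its own, and in practice you would point the URL at the real site on the server box:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Sketch: time an HTTP round trip to localhost, which excludes network
// latency and isolates server-side time. The embedded HttpServer is only a
// stand-in; replace the URL with your real site when testing a server box.
public class LocalLatency {
    static long timeLocalRequest() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();

        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/");
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            in.readAllBytes();
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        server.stop(0);
        return elapsedMs;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("localhost round trip: " + timeLocalRequest() + " ms");
    }
}
```

If even the localhost number sits above ~100ms, the delay is inside the server stack rather than the network, and profiling the box is the next step.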

In Spring MVC, how to find out page load time?

I'm trying to display a server response time on the page similar to google's search time, something like "page loaded in about 1.3 seconds" or so.
What is the best way of achieving this? I currently have an MVC framework set up, and my initial approach was to store the initial time in the controller and pass it as part of the model into the view; it is then up to the view to calculate the elapsed time.
Somehow I feel there must be a better approach: for all requests, the context might already have the information recorded, either the request start time or the elapsed time.
Can someone please verify whether my original thought was right, or whether an already implemented solution exists?
Thanks,
Jason
If you wait all the way until your controller, you're potentially missing a lot of the "load time". You want to use a Filter to time the request from as early to as late in the process as possible. The Java EE Tutorial has more details on writing Filters. There's also another SO answer that deals with exactly this:
In spring MVC, where to start and end counter to test for speed execution time?
You could use a servlet filter to store the start time in a request attribute for each request (or at least for each request to a page), and compute the elapsed time at the end of your view execution.
If you use a template engine like Tiles or SiteMesh, this elapsed time computation would be called in a single place: the page template.
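To illustrate the shape of that filter without pulling in the servlet API, here is a plain-Java sketch: the "filter" records the start time in a request-attribute map before invoking the handler (the equivalent of `chain.doFilter()`) and computes the elapsed time afterwards. The names here are illustrative, not from any framework:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Plain-Java sketch of the servlet-filter timing idea: store the start time
// in a per-request attribute map before the handler runs (what a Filter does
// before chain.doFilter()), then compute the elapsed time after it returns.
public class TimingFilter {
    static long timeRequest(Consumer<Map<String, Object>> handler) {
        Map<String, Object> requestAttributes = new HashMap<>();
        requestAttributes.put("startTime", System.nanoTime()); // before the chain
        handler.accept(requestAttributes);                     // chain.doFilter(...)
        long start = (Long) requestAttributes.get("startTime");
        return (System.nanoTime() - start) / 1_000_000;        // after the chain
    }

    public static void main(String[] args) {
        long elapsed = timeRequest(attrs -> {
            // Simulated controller work plus view rendering.
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        });
        System.out.println("page loaded in about " + elapsed + " ms");
    }
}
```

In a real servlet Filter the attribute would go on the `ServletRequest`, and the template would read it back to render the "page loaded in about X" message.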

Troubleshoot ASP.NET MVC 3 controller performance

We have an ASP.NET MVC 3 application with 3 areas, Unity dependency injection, and about 20 routes. The total time to render a page is very inconsistent. The biggest problem seems to be the amount of time it takes to start the action method within the controller, even when viewing the same URL. Sometimes the action starts within 100 milliseconds; sometimes it takes more than a second. This happens in all environments, from development to production.
Does anybody have some fresh things to try?
Check out MvcMiniProfiler.
It will allow you to measure the time it takes to render any portion of the action method you specify.
Not sure what you mean by "the time it takes to start the action method".
Maybe there are some rogue action filters going on?
Sorry, but this is a big problem with having a huge singleton (Unity) around implementing IDependencyResolver. I would bet that you are leaking memory.
edit
In response to your comment:
The reason that a memory leak or the DI container struck me as the issue is that there should be essentially no time between the controller firing up and an action firing up, as they are very close to each other. A simple way to test whether it is a memory leak is to let the application sit idle for a good amount of time (30 minutes to 2 hours) and then revisit it. If it is quick at first, that could indicate a memory leak. If it is slow on the first request, then perhaps it is something else. If a memory leak is not the issue, then perhaps it is something simpler. You said it happens before the controller finishes, so I would rule out rendering the view (which can take some time). Something you said makes me wonder about your web.config file: "this happens in all environments from development to production." Perhaps your production environment is still running under debug=true. These are all the ideas I can think of at the moment.

Automatically rebuild cache

I run a Symfony 1.4 project with a very large amount of data. The main page and the category pages use pagers, which need to know how many rows are available. I'm passing a query that contains joins to the pager, which leads to a loading time of 1 minute on these pages.
I configured cache.yml for the respective actions, but I think this workaround is insufficient. Here are my assumptions:
Symfony rebuilds the cache within a single request made by a user. Let's call this user the "cache victim", to simplify things.
In our case, the data needs to be up to date; a lifetime of 10 minutes would be sufficient. Obviously, the cache won't be rebuilt if no user is willing to be the "cache victim" and everyone just cancels the request. Are these assumptions correct?
So, I came up with this idea:
Symfony should fake the HTTP request after rebuilding the cache. The new cache entries should be written to a temporary file/directory and swapped with the previous cache entries as soon as cache rebuilding has finished.
Is this possible?
In my opinion, this is similar to the concept of double buffering.
Wouldn't it be silly if there were a single "GPU victim" in a multiplayer game who sees the screen building up line by line? (This is a lopsided comparison, I know... ;) )
Edit
There is no "cache victim": every 10 minutes, page loading takes 1 minute for every user.
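The double-buffering idea the question describes can be sketched in general terms. The pattern is language-independent (shown here in Java; a Symfony project would do the same from a cron-triggered PHP task): a background job rebuilds the cache into a temporary file in the same directory, then atomically renames it over the live file, so no user request ever pays the rebuild cost and readers always see a complete cache:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the "double buffering" cache swap: rebuild into a temp file in
// the same directory (same filesystem, so the rename is atomic), then swap
// it over the live cache file. Readers see either the old complete cache or
// the new complete cache, never a partially written one.
public class CacheSwap {
    static void rebuildCache(Path liveCache, String freshContent) throws Exception {
        Path temp = Files.createTempFile(liveCache.getParent(), "cache", ".tmp");
        Files.write(temp, freshContent.getBytes(StandardCharsets.UTF_8));
        Files.move(temp, liveCache,
                StandardCopyOption.REPLACE_EXISTING,
                StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("cachedemo");
        Path live = dir.resolve("page.cache");
        Files.write(live, "old".getBytes(StandardCharsets.UTF_8));
        rebuildCache(live, "new"); // e.g. triggered by a cron job every 10 minutes
        System.out.println(Files.readString(live)); // prints "new"
    }
}
```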
I think your problem is due to some missing or wrong indexes. I have an sf1.4 project for a large soccer site (i.e., 2M pages/day), and the pagers aren't that slow, even though our database has more than 1M rows these days. Take a look at your query with EXPLAIN and check where it goes bad.
Sorry for necromancing (is there a badge for that?).
By configuring cache.yml you are just caching the view layer of your app (that is, CSS, JS and HTML) for REQUESTS WITHOUT PARAMETERS. Navigating the pager obviously puts a ?page=X on the GET request.
Taken from symfony 1.4 config.yml documentation:
An incoming request with GET parameters in the query string or submitted with the POST, PUT, or DELETE method will never be cached by symfony, regardless of the configuration. http://www.symfony-project.org/reference/1_4/en/09-Cache
What might help you is caching the database results, but it's a painful process in symfony/doctrine. Refer to:
http://www.symfony-project.org/more-with-symfony/1_4/en/08-Advanced-Doctrine-Usage#chapter_08_using_doctrine_result_caching
Edit:
This might help you as well:
http://www.zalas.eu/symfony-meets-apc-alternative-php-cache
