I want to measure the performance of a web application for mobile devices (designed in Microsoft PowerApps) from my Android and iOS devices. Basically, I am mainly interested in the UI performance KPIs and the response time from page to page.
I want to perform this from the devices, not with a simulator. Is there any open-source platform that can be installed on the mobile devices to achieve this? Any suggestions or workarounds?
Couldn't find any similar post.
To measure key web vitals
There is a great library by Google, web-vitals, for monitoring these. It lets you monitor in real time, and you can even pipe the info to Google Analytics or your own server (see the sketch after the list below).
It monitors:
Cumulative Layout Shift (CLS)
First Input Delay (FID)
Largest Contentful Paint (LCP)
First Contentful Paint (FCP)
Time to First Byte (TTFB)
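A minimal sketch using the web-vitals package (the API names vary between versions; v3+ exposes onCLS/onFID/onLCP etc., older releases used getCLS/getFID/getLCP, so check the version you install):

    // Report each metric to your own endpoint ('/analytics' is just a placeholder).
    import { onCLS, onFID, onLCP, onFCP, onTTFB } from 'web-vitals';

    function sendToAnalytics(metric) {
      // metric contains { name, value, id, ... } for the given web vital
      navigator.sendBeacon('/analytics', JSON.stringify(metric));
    }

    onCLS(sendToAnalytics);
    onFID(sendToAnalytics);
    onLCP(sendToAnalytics);
    onFCP(sendToAnalytics);
    onTTFB(sendToAnalytics);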
To measure other metrics
As for other metrics, they can (nearly) all be derived from window.performance.
This gives you all the information you need to calculate load times etc. (the list below is taken from this answer; a short example follows it):
connectEnd Time when server connection is finished.
connectStart Time just before server connection begins.
domComplete Time just before document readiness completes.
domContentLoadedEventEnd Time after DOMContentLoaded event completes.
domContentLoadedEventStart Time just before DOMContentLoaded starts.
domInteractive Time just before readiness set to interactive.
domLoading Time just before readiness set to loading.
domainLookupEnd Time after domain name lookup.
domainLookupStart Time just before domain name lookup.
fetchStart Time when the resource starts being fetched.
loadEventEnd Time when the load event is complete.
loadEventStart Time just before the load event is fired.
navigationStart Time after the previous document begins unload.
redirectCount Number of redirects since the last non-redirect.
redirectEnd Time after last redirect response ends.
redirectStart Time of fetch that initiated a redirect.
requestStart Time just before a server request.
responseEnd Time after the end of a response or connection.
responseStart Time just before the start of a response.
timing Reference to a performance timing object.
navigation Reference to performance navigation object.
performance Reference to performance object for a window.
type Type of the last non-redirect navigation event.
unloadEventEnd Time after the previous document is unloaded.
unloadEventStart Time just before the unload event is fired.
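A quick sketch deriving a few load metrics from the properties above, using the older performance.timing interface they belong to (newer code would read performance.getEntriesByType('navigation') instead):

    // All values are millisecond timestamps since the epoch.
    var t = window.performance.timing;

    var ttfb     = t.responseStart - t.navigationStart;            // Time to First Byte
    var domReady = t.domContentLoadedEventEnd - t.navigationStart; // DOM ready
    var pageLoad = t.loadEventEnd - t.navigationStart;             // full page load

    console.log({ ttfb: ttfb, domReady: domReady, pageLoad: pageLoad });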
The two difficult ones to calculate
The only things you can't calculate from the above are Total Blocking Time (TBT) and Time To Interactive (TTI).
For TBT you would need to use a PerformanceObserver and find all long tasks (greater than 50 ms) between First Contentful Paint and Time To Interactive.
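A minimal sketch of the long-task part, assuming browser support for the Long Tasks API (it does not bound the FCP-to-TTI window, which you would still have to do yourself):

    // Accumulate the blocking portion (anything over 50 ms) of every long task.
    var totalBlockingTime = 0;

    var observer = new PerformanceObserver(function (list) {
      list.getEntries().forEach(function (entry) {
        totalBlockingTime += entry.duration - 50; // long tasks are >50 ms by definition
      });
    });

    observer.observe({ type: 'longtask', buffered: true });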
Time to Interactive is equally complicated and you would have to do some research on that (maybe start here) as it is quite hard to implement and I don't bother tracking it on live stats.
Or you could just adapt this polyfill for TTI
Related
Since my web application uses many AJAX requests, it is categorised as a Single Page Application.
What I want is to track AJAX technical performance using Google Analytics.
The GA documentation suggests implementing Virtual Pageview Tracking, as detailed in this link:
https://developers.google.com/analytics/devguides/collection/analyticsjs/single-page-applications
After implementing virtual pageview tracking, the Pageviews stats and Page URI seem to be fed into GA correctly. But Timing stats such as Avg. Page Load Time (sec) are not; all of them have no value!
I tried these 3 scenarios to implement Virtual Page Tracking, but none of them is working.
Am I missing something? Or is it a GA limitation, so we cannot collect Timing stats for virtual pages the way we can for real pageviews?
Are there any other tools you would suggest for tracking AJAX performance?
GA is not meant to be used to track page performance, and the Value field in GA implies monetary value.
When it says "tracking pageviews" it's not about measuring performance, it's about tracking user activity. As in, how many pages per session, what pages, what led to conversions, where they have troubles going through and so forth. Not a technical tool, but an analytics/marketing tool.
Technically, you still could use it to track page performance and people do it. But not as you've done it. You have to remove any network influence on your timestamps since normal fluctuation there would exceed the useful timing of page performance.
I think the most elegant way of doing it would be creating a custom metric in GA interface and then populate it with performance measuring events (or pageviews). So:
You take a new Date() timestamp (or whatever you use in jQuery to get the current timestamp) right before the POST request
You get another new Date() in the post callback
You calculate the difference in milliseconds and send that as the value of the custom metric with the pageview
You wait for two days for the new data to get processed and build a custom report using your custom metric.
Now when you improve performance of your endpoint, you will be able to see statistical improvements in that report.
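A minimal sketch of steps 1-3 with analytics.js, assuming you have created a custom metric and it sits in slot metric1 (the endpoint and virtual page path are placeholders):

    // Time one AJAX call and send the elapsed milliseconds as a custom metric
    // attached to a virtual pageview.
    var start = new Date().getTime();

    $.post('/api/search', { q: 'foo' }, function (response) {
      var elapsedMs = new Date().getTime() - start;

      ga('set', 'page', '/virtual/search');
      ga('send', 'pageview', { metric1: elapsedMs });
    });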
This is usually done on the backend though, with Datadog or a similar tool with endpoint-monitoring functionality.
When performance is measured on the front end, we usually use the native Performance API, i.e. the window.performance object, or whatever your front-end rendering library suggests using for that. Here's a bit more on this: https://developer.mozilla.org/en-US/docs/Web/API/performance_property That way you're taking into account a bit more data, not just one endpoint's response time.
In my app the user types in some content which I would like to auto-save as the user types. The save call is not made for every keystroke; rather, I autosave only when the user pauses for more than 200 ms. So in a typical paragraph there are 15-20 server calls. The content will not be read very often, so I need to optimize the writes.
I have to save the data on MSSQL Server for legacy code reasons. I'm getting a 10-second average response time in my load test. How do I improve the performance?
One approach I'm considering is, instead of directly saving the data in MSSQL, to save it in Cassandra or Redis and then eventually (maybe at regular time intervals) write it to MSSQL.
Another approach is, instead of doing frequent updates, to insert a new record for each auto-save. Then a background process would clean up all records except the latest every few minutes.
Update:
I replaced the existing logic with simple update calls to 2 tables and now I am seeing improvements. There was a long stored procedure which was taking up to 10 seconds under load. So for now I have a handle on the problem. Still, I would like to know whether there is something I can do at the application server layer to reduce frequent DB calls.
It is quite hard to answer your question directly, but here are some hints based on what we do in a situation with multiple active users.
If you are writing/triggering on every keystroke, pass the keystroke to a background thread and do not perform the database write, or any network call, while blocking the users typing. A fast typist can hit 20 keystrokes/second, and you cannot afford to introduce latency.
If recording on a web page, you might be able to use localStorage. Do not issue an AJAX-style call on every keystroke, as there is a limit to outstanding requests. You need to implement some kind of buffered send. Remember that real-world network calls can take on the order of 300 ms just to traverse the network.
Do you really need to save every keystroke, or is every N seconds acceptable? Every save operation will eventually turn into a disk operation, so you really want to coalesce as many saves as possible. The quickest way to do something is not to do it at all.
If you are recording to a database, then it is often quicker to update an existing row, if you can fetch it by direct key first. Unfortunately it can sometimes be quicker to insert a new row and clean up the excess later. This tends to be true if the table has few indexes. Which is quicker depends on the database engine in use and how it is being used. We use both methods.
When using a database keep in mind that they often keep journals of some kind, so if you are updating frequently you might create a large load on the journal files.
If you are using techniques (using C terminology) like fopen/fwrite, these can perform very well, but if you are worried about recovery from system failure, you may need to call fsync, which then limits your maximum performance rate. If you need fsync, a database might be better.
You might like to consider writing to a transactionlog table very frequently, and then posting to the real storage every N seconds. For example, if I am typing a customers name I might record every keystroke into a keylog table, and then have a background job read the keylog table and transfer the data to customers table. This helps reduce the operations to the customers table while also allowing the keylog table to be optimised to recording keystrokes. But, at the cost of more code server side.
Overall, you want logic like this (a browser-side sketch follows the outline):
On keyup handler
Add keystroke to background queue
Wake background thread
Background thread
Read/remove ALL data from background queue
If no data, wait for wakeup and repeat
Write to database/network/file etc. as one operation (these can now be synchronous calls)
Optionally apply some velocity control; a simple one is sleep(50 ms) or sleep(2 s)
Repeat
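A minimal browser-side sketch of that buffering idea, assuming a hypothetical /autosave endpoint and the 200 ms pause rule from the question:

    // Coalesce keystrokes and send one save per typing pause instead of one per key.
    var pending = null;      // latest unsaved content
    var flushTimer = null;

    function onKeyUp(event) {
      pending = event.target.value;           // queue the latest content
      clearTimeout(flushTimer);               // restart the pause timer
      flushTimer = setTimeout(flush, 200);    // flush after a 200 ms typing pause
    }

    function flush() {
      if (pending === null) return;
      var body = pending;
      pending = null;
      fetch('/autosave', { method: 'POST', body: body }).catch(function () {
        pending = body;                       // keep the data so a later flush can retry
      });
    }

    // Final flush if the user closes the tab before the timer fires (see the note below).
    window.addEventListener('beforeunload', function () {
      if (pending !== null) navigator.sendBeacon('/autosave', pending);
    });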
Keep in mind with the above the user can type and immediately hit close, so your final buffer write might not have flushed yet. You need to handle this.
If you get this correct, the user will not notice any delay. In our usage, we are recording around 1000 keystrokes/sec on average, all of which are routed over private networks to central points. This load is barely a blip; even network monitoring does not see such a small amount of traffic.
Good luck.
I just deployed my Meteor app onto a production server on Digital Ocean.
I noticed that for about 7500 documents, it takes about 3-5 seconds to fully fetch the objects (selectively taking only 3 fields) and populate the autocomplete data. I believe it should be more or less instantaneous for that number of documents, so I am curious how I can debug the performance issue from here and optimize further. How should I go about debugging performance issues in a Meteor app? I tried looking at the network tab, but nothing seems to take more than a second. I am not sure why it takes 3-5 seconds for the search bar with the autocomplete feature to get ready. After closer inspection, populating the autocomplete fields is instantaneous, and the time until the subscribe function's callback is called is about 3 to 5 seconds.
I've already looked into Kadira, but it reported that everything was complete within milliseconds, so I am confused.
possibly related: Meteor's subscription and sync are slow
After all, is 3-5 seconds for 7800 documents with 2 fields reasonable?
Let me tell you what's really happening here.
Kadira shows the time taken to fetch the data from the server and queue it to the network. So, 500 - 700 ms is reasonable for that.
So, this 3-5 second latency is network latency, i.e. the time taken to send the data to the client over the network. That's quite normal for 7500+ documents, even with three fields, over DDP.
So, my suggestion is to do the search on the server and use something like Search Source for that.
With that, only the data actually required reaches the client, which reduces the latency and saves CPU in your app.
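This is not the Search Source API, but a plain Meteor method doing the filtering server-side would look roughly like this (collection name and fields are assumptions):

    // Server: return only matching documents, restricted to the fields the UI needs.
    Meteor.methods({
      searchDocs: function (query) {
        return Docs.find(
          { name: { $regex: query, $options: 'i' } },
          { fields: { name: 1, slug: 1 }, limit: 20 }
        ).fetch();
      }
    });

    // Client: call it as the user types (debounced) and feed the results to the autocomplete.
    Meteor.call('searchDocs', 'foo', function (err, results) {
      if (!err) updateAutocomplete(results);   // updateAutocomplete: your own UI code
    });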
I'm interested in finding out why this is used on some Web sites for processing user-initiated search submissions, how it affects the request and response flow, and programmatically why it would be necessary (or beneficial). In an MVC framework it seems difficult to execute since you are injecting another page into the middle of the flow.
EDIT:
Not advertising related. For instance, most travel sites used to do this, and there were no ads... some banking sites do it too, where there is just a loader that says something like "Please wait while we process your transaction...".
It is often used in long running requests to prevent the web server from timing out the request. With an interstitial page, you are able to continuously refresh the page until you get results back.
EDIT:
Also, for long running requests, it is beneficial to have a "Loading.." page in order to show the user that something is happening. Without the interstitial page, the request can appear to have hung up if it takes too long.
To supplement what HVS said, interstitials which appear before, say, a homepage loads are very much there for the purpose of advertising; we've all seen the 'close this ad' link.
One instance where they can be helpful from a user experience point of view is when a user initiates an action which requires feedback from a process which may take some time to respond - either because it's slow, busy or just has a lot of processing to do.
Think of a site where you book a flight online, for example. You often get an interstitial on hitting 'find flights' because the system is having to go off and ask for all relevant flight information and then sort it for you before displaying it on your screen. If this round trip of 'request, interrogate, return, display' is likely to take longer than a normal webpage transition from one page to the next, a UX designer may consider an interstitial screen (or message) to let the user know something is happening, whilst at the same time allowing the system the time it needs to complete the request. Any screen with this sort of face-time is going to get the attention of your marketing department from a 'well, while we've got them we might as well show them something' point of view.
As a UX designer myself, I don't always prefer interstitials like this, as I'd love every system to return data immediately; but if it can't, for whatever reason, I'm very much for keeping the user in the loop as much as possible about what is happening, rather than leaving them to stare at the browser status bar until they either try again or get fed up and leave.
One final point when considering this is to have a lower and an upper time limit on a screen like this. If you need to show an interstitial, show it for long enough that people can read and understand it, but not so long that they get fed up of waiting. As a rough guide, leave it open for at least 3-4 seconds (even if the process averages 4 seconds but has finished after 1 on this occasion). Between 4 and 10 seconds, check every second to see if the process has responded (and then take the user to the next page if it has), and after 10 seconds seriously consider telling the user to either try again or telling them you've failed (whilst at the same time getting your tech team to fix what is ultimately a problem that will affect your bottom line).
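A rough client-side sketch of those timing rules (the status endpoint and failure handler are hypothetical):

    // Show the interstitial for at least ~4 s, poll every second, give up after 10 s.
    var MIN_DISPLAY_MS = 4000;
    var MAX_WAIT_MS = 10000;
    var shownAt = Date.now();

    var poll = setInterval(function () {
      var elapsed = Date.now() - shownAt;

      fetch('/search/status?id=123')
        .then(function (res) { return res.json(); })
        .then(function (status) {
          if (status.done && elapsed >= MIN_DISPLAY_MS) {
            clearInterval(poll);
            window.location.href = status.resultUrl;   // take the user to the results
          } else if (elapsed >= MAX_WAIT_MS) {
            clearInterval(poll);
            showRetryMessage();                        // hypothetical "try again / failed" UI
          }
        });
    }, 1000);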
I believe the vast majority of interstitial pages are there to run advertising.
I'm currently trying to build an application that inherently needs good time synchronization across the server and every client. There are alternative designs for my application that can do away with this need for synchronization, but my application quickly begins to suck when it's not present.
In case I am missing something, my basic problem is this: firing an event in multiple locations at exactly the same moment. As best I can tell, the only way of doing this requires some kind of time synchronization, but I may be wrong. I've tried modeling the problem differently, but it all comes back to either a) a sucky app, or b) requiring time synchronization.
Let's assume I Really Really Do Need synchronized time.
My application is built on Google AppEngine. While AppEngine makes no guarantees about the state of time synchronization across its servers, usually it is quite good, on the order of a few seconds (i.e. better than NTP), however sometimes it sucks badly, say, on the order of 10 seconds out of sync. My application can handle 2-3 seconds out of sync, but 10 seconds is out of the question with regards to user experience. So basically, my chosen server platform does not provide a very reliable concept of time.
The client part of my application is written in JavaScript. Again we have a situation where the client has no reliable concept of time either. I have done no measurements, but I fully expect some of my eventual users to have computer clocks that are set to 1901, 1970, 2024, and so on. So basically, my client platform does not provide a reliable concept of time.
This issue is starting to drive me a little mad. So far the best thing I can think to do is implement something like NTP on top of HTTP (this is not as crazy as it may sound). This would work by commissioning 2 or 3 servers in different parts of the Internet, and using traditional means (PTP, NTP) to try to ensure their sync is at least on the order of hundreds of milliseconds.
I'd then create a JavaScript class that implemented the NTP intersection algorithm using these HTTP time sources (and the associated roundtrip information that is available from XMLHTTPRequest).
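A rough sketch of what I have in mind for the client half, estimating the offset from one HTTP time source using the request round trip (the /time endpoint is just a placeholder):

    // Poor man's NTP: assume the server's reply was generated halfway through the round trip.
    function estimateOffset() {
      var t0 = Date.now();                                   // client send time
      return fetch('https://time.example.com/time')
        .then(function (res) { return res.json(); })
        .then(function (body) {
          var t1 = Date.now();                               // client receive time
          var serverTime = body.now;                         // server timestamp in ms
          return serverTime + (t1 - t0) / 2 - t1;            // offset to add to Date.now()
        });
    }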
As you can tell, this solution also sucks big time. Not only is it horribly complex, but only solves one half the problem, namely giving the clients a good notion of the current time. I then have to compromise on the server, either by allowing the clients to tell the server the current time according to them when they make a request (big security no-no, but I can mitigate some of the more obvious abuses of this), or having the server make a single request to one of my magic HTTP-over-NTP servers, and hoping that request completes speedily enough.
These solutions all suck, and I'm lost.
Reminder: I want a bunch of web browsers, hopefully as many as 100 or more, to be able to fire an event at exactly the same time.
Let me summarize, to make sure I understand the question.
You have an app that has a client and server component. There are multiple servers that can each be servicing many (hundreds) of clients. The servers are more or less synced with each other; the clients are not. You want a large number of clients to execute the same event at approximately the same time, regardless of which server happens to be the one they connected to initially.
Assuming that I described the situation more or less accurately:
Could you have the servers keep certain state for each client (such as the initial time of connection, in server time), and when the time of the event that needs to happen is known, notify each client with a message containing the number of milliseconds that must elapse after that initial value before firing the event?
To illustrate:
client A connects to server S at time t0 = 0
client B connects to server S at time t1 = 120
server S decides an event needs to happen at time t2 = 500
server S sends a message to A:
S->A : {eventName, 500}
server S sends a message to B:
S->B : {eventName, 380}
This does not rely on the client time at all; just on the client's ability to keep track of time for some reasonably short period (a single session).
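A client-side sketch of that relative-delay idea (the message shape and transport are assumptions, not part of the answer):

    // The server sends "fire eventName N milliseconds after you first connected",
    // so the client's wall-clock setting never matters - only its ability to
    // measure time within a single session.
    var connectedAt = null;
    var socket = new WebSocket('wss://example.com/events');

    socket.onopen = function () {
      connectedAt = Date.now();                         // local time when the session started
    };

    socket.onmessage = function (msg) {
      var data = JSON.parse(msg.data);                  // e.g. { eventName: 'x', msAfterConnect: 380 }
      var delay = connectedAt + data.msAfterConnect - Date.now();
      setTimeout(function () {
        fireEvent(data.eventName);                      // fireEvent: your own handler
      }, Math.max(0, delay));
    };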
It seems to me like you need to listen to a broadcast event from a server in many different places. Since you can accept 2-3 seconds of variation, you could just put all your clients into long-lived comet-style requests and get the response from the server. Sounds to me like the clients wouldn't need to deal with time at all this way?
You could use AJAX to do this, so you'd be avoiding any client-side lock-ups while waiting for new data.
I may be missing something totally here.
If you can assume that the clocks are reasonably stable, that is, they are set wrong but ticking at more or less the right rate:
Have the servers get their offset from a single defined source (e.g. one of your servers, or a database server, or something).
Then have each client calculate its offset from its server (with possible round-trip complications if you want lots of accuracy).
Store that, then use the combined offset on each client to trigger the event at the right time.
(client-time-to-trigger-event) = (scheduled-time) + (client-to-server-difference) + (server-to-reference-difference)
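In code, combining the two stored offsets from the formula above might look like this (where the offsets come from whatever measurements you already do on the server and on each client):

    // scheduledTime and both offsets are in milliseconds; fireEvent is your own handler.
    function scheduleTrigger(scheduledTime, clientToServerOffset, serverToReferenceOffset) {
      var clientTimeToTrigger =
        scheduledTime + clientToServerOffset + serverToReferenceOffset;
      setTimeout(fireEvent, clientTimeToTrigger - Date.now());
    }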
Time synchronization is very hard to get right and in my opinion the wrong way to go about it. You need an event system which can notify registered observers every time an event is dispatched (observer pattern). All observers will be notified simultaneously (or as close as possible to that), removing the need for time synchronization.
To accommodate latency, the browser should be sent the timestamp of the event dispatch, and it should wait a little longer than what you expect the maximum latency to be. This way all events will be fired up at the same time on all browsers.
Google found a way to define time as being absolute. It sounds heretical to a physicist and with respect to General Relativity: time flows at a different pace depending on your position in space and time, on Earth, in the Universe...
You may want to have a look at Google Spanner database: http://en.wikipedia.org/wiki/Spanner_(database)
I guess it is used now by Google and will be available through Google Cloud Platform.