Field Data vs Origin Summary - performance

I have measured performance for a handful of my sites. Some are larger and get more traffic than others. One of my smaller sites, which I am aware does not get a lot of traffic, does not show field data metrics but does show origin summary metrics.
If the origin summary is an aggregate data measurement and field data comes from CrUX, what is the difference? Isn't the CrUX report an aggregate of the same numbers and metrics from which the origin summary is getting its data?

Field data is for a specific page.
Origin Summary is for the entire origin (domain).
So even though both are fetched from the CrUX dataset, the individual page does not have a sufficient number of distinct samples to provide representative, anonymized data, while the origin as a whole does.
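As an illustration (not part of the original answer), the CrUX API exposes the same distinction: you can request a record by url (page-level field data) or by origin (site-wide data), and a low-traffic page can return 404 while its origin still has a record. A minimal sketch, where the API key and example URLs are placeholders:

// Query the CrUX API at page level vs. origin level (sketch; key and URLs are placeholders).
var endpoint = "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=" + CRUX_API_KEY;

function queryCrux(body) {
  return fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body)
  }).then(function (res) {
    if (res.status === 404) return null; // not enough samples for this url/origin
    return res.json();
  });
}

queryCrux({ url: "https://example.com/some-page" }).then(console.log);  // may be null for a quiet page
queryCrux({ origin: "https://example.com" }).then(console.log);         // can still have data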

Related

Why am I missing some data when querying the Chrome UX Report API?

When querying the Chrome UX Report API I sometimes get a 404 error, "chrome ux report data not found". The documentation says: if 404, the CrUX API doesn't have any data for the given origin.
For all URLs I query I get some metrics; there is no URL for which all metrics are missing, and for most URLs I get all the data.
But there are cases where the data for a certain metric is missing. For one URL the FID data is missing (data for all other metrics exists); for other URLs FID, LCP and CLS are missing (data for FCP exists).
Is this a kind of API glitch? What should I do to get data for all queried metrics?
PS: if I query the same URLs now and again after 30 minutes, I get different results: different metrics are missing for the same URLs; on the first query FCP is missing, on the second LCP and CLS... Why is that?
FCP is the only metric guaranteed to exist. If a user visits a page but it doesn't have an FCP, CrUX throws it away. It's theoretically possible for some users to experience FCP but not LCP, for example if they navigate away in between events. Newer metrics like CLS weren't implemented in Chrome until relatively recently (2019) so users on much older versions of Chrome will not report any CLS values. There are also periodic metric updates and Chrome may require that metrics reflect the latest implementation in order to be aggregated in CrUX.
The results should be stable for roughly 1 full day. If you're seeing changes after only 30 minutes, it's possible that you happened to catch it during the daily update.
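To make that concrete (my own illustration, not part of the original reply): because individual metric keys can be absent from an otherwise valid response, client code should treat every metric as optional rather than assuming the full set. A minimal sketch against the queryRecord response shape:

// Defensive read of CrUX metrics: any of these keys may be absent for a given URL.
// "response" is assumed to be the parsed JSON body of a successful queryRecord call.
var WANTED = [
  "first_contentful_paint",
  "largest_contentful_paint",
  "first_input_delay",
  "cumulative_layout_shift"
];

function summarizeMetrics(response) {
  var metrics = (response.record && response.record.metrics) || {};
  return WANTED.map(function (name) {
    var m = metrics[name];
    return {
      metric: name,
      p75: m ? m.percentiles.p75 : null // null = not reported for this URL right now
    };
  });
}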

Is there any difference in metrics when Querying the data using Eloqua API vs Getting a report from Eloqua Insights?

I am validating the data from Eloqua Insights against the data I pulled using the Eloqua API, and there are some differences in the metrics. Are there any known issues when pulling the data via the API versus exporting a .csv file from Eloqua Insights?
Absolutely. Besides undocumented data discrepancies that might exist, Insights can aggregate, calculate, and expose various hidden relationships between data in Eloqua that are not accessible through an API export definition.
Think of the API as the raw data, with the ability to pick and choose fields and apply a general filter on them, and Insights/OBIEE as a way to calculate over that data, create those relationships across tables of raw data, and then present it in a consumable manner to the end user. A user has little use for a 1 GB CSV of individual unsubscribes for the past year, but present that in a few graphs on a dashboard with running totals, averages, and time series, and it suddenly becomes actionable.
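As a generic illustration of that difference (my own sketch, not Eloqua-specific and not from the original answer), rolling raw per-event rows up into a monthly time series is exactly the kind of step a reporting layer does for you and a raw export leaves to you. The row shape below is invented:

// Roll raw unsubscribe rows up into monthly totals (illustrative only; the row shape is invented).
// "rows" stands in for whatever a raw export gives you, e.g. [{ date: "2015-03-12", email: "..." }, ...]
function monthlyUnsubscribes(rows) {
  var totals = {};
  rows.forEach(function (row) {
    var month = row.date.slice(0, 7); // "YYYY-MM"
    totals[month] = (totals[month] || 0) + 1;
  });
  return Object.keys(totals).sort().map(function (month) {
    return { month: month, unsubscribes: totals[month] };
  });
}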

Exporting time series response data for VS2013 load tests

I am trying to figure out how to export and then analyze the results of a load test, but after the test is over I cannot seem to find the data for each individual request by URL. This data shows during the load test itself, but once it is over that data no longer seems accessible and all I can find are totals. The data I want is under the "Page response time" graph in the graphs window during the test. I know this is not the response time for every single request and is probably averaged, but that would suffice for the calculations I want to make.
I have looked in the database on my local machine (LoadTest2010, where all of the summary data is stored) and I cannot find the data I'm looking for. For reference, I am load testing a single-page application.
My goal is to plot (probably in excel) each request url against the user load and analyze the slope of the response time averages to determine which requests scale the worst (and best). During the load test I can see this data and get a visual idea but when it ends I cannot seem to find it to export.
A) Can this data be exported from within Visual Studio? Is there a setting required to make VS persist this data to the database? Under Run Settings, I have the "Results" section's "Timing Details Storage" set to "All individual details" and the Storage Type set to "Database".
B) Is this data in any of the tables in the LoadTest2010 database where all of the summary data is stored? It might be easier to query manually if it's not spread out too much, but all I was able to find was summary data.
I was able to find the data I wanted in the database. The tables I needed were WebLoadTestRequestMap (which has the request URIs in it) and LoadTestPageDetail (which has the individual response times themselves). They can be joined on WebLoadTestRequestMap.RequestId and LoadTestPageDetail.PageId (unintuitively).
I do have the "Results" section's "Timing Details Storage" set to "All individual details" and the Storage Type set to "Database", yet it did not seem like every load test's results were available, maybe because of this setting.
More data on the layout of the load test database here: http://blogs.msdn.com/b/slumley/archive/2010/02/12/description-of-tables-and-columns-in-vs-2010-load-test-database.aspx
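For anyone reproducing this, a query along the following lines pulls the per-request response times described above. The join columns come from the answer; the other column names (RequestUri, PageTime, LoadTestRunId) are my assumption and may differ, so check them against the table descriptions at the link:

-- Sketch: per-request response times from the LoadTest2010 results database.
-- RequestUri, PageTime and LoadTestRunId are assumed column names; verify against your schema.
SELECT m.RequestUri,
       d.PageTime                       -- individual page response time
FROM   LoadTestPageDetail d
       JOIN WebLoadTestRequestMap m
         ON m.RequestId = d.PageId      -- join keys from the answer above
-- WHERE d.LoadTestRunId = @RunId       -- optionally restrict to a single run
ORDER BY m.RequestUri;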

Are Kendo DataSources able to handle data response sizes that differ from PageSize?

We have noticed that DataSources can drop rows in a few scenarios. After the data from the server is parsed and inserted into the dataSource._ranges[] array, you can see that not all of the rows received by dataSource:parse() and dataSource:data() are available in the ranges.
If the data response size is less than the pageSize() value, the grid has problems scrolling and paging, and upon scrolling the dataSource() will continually request (page 1) and (page 2) over and over again.
If the data response size is larger than the pageSize() it might work, but we are unsure. We have also noticed that if the data response size is 2x the pageSize() we are more likely to have an issue with data finding its way into the ranges.
The server is aggregating data from several services and it's hard to predict the number of records that will be returned.
Should Kendo support server responses that have more or less data than the page size suggests?
If the response size differs from the page size sent in the request, the server is not returning what is being requested of it so I would say all bets are off. I would expect the client to ignore extra data since it didn't ask for it. Is your server taking the requested page size into account when it's generating its response? I can see where it would return less data but should never return more data than requested.
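To make that expectation concrete (my own sketch, not from the original answer): with serverPaging enabled, the DataSource sends paging parameters (take, skip, page, pageSize) with each read, and the server is expected to return exactly that slice plus a total count. The endpoint and schema field names below are placeholders:

// Sketch of a server-paged Kendo DataSource; the endpoint and schema fields are placeholders.
var dataSource = new kendo.data.DataSource({
  transport: {
    read: {
      url: "/api/records",    // hypothetical endpoint
      dataType: "json"
    }
  },
  serverPaging: true,         // the server receives take, skip, page and pageSize...
  pageSize: 50,               // ...and should return exactly pageSize rows (fewer only on the last page)
  schema: {
    data: "items",            // array of rows for the requested page
    total: "totalCount"       // overall row count, so paging and virtual scrolling line up
  }
});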

How to handle large dataset in d3js

I have a data set of 11 MB. Loading it every time the document loads is slow.
d3.csv("https://s3.amazonaws.com/vidaio/QHP_Individual_Medical_Landscape.csv", function(data) {
// drawing code...
});
I know that crossfilter can be used to slice and dice the data once it's loaded in the browser. But before that, the dataset is big. I only use an aggregation of the data, so it seems like I should pre-process it on the server before sending it to the client, maybe using crossfilter on the server side. Any suggestions on how to handle/process a large dataset for d3?
Is your data dynamic? If it's not, then you can certainly aggregate it and store the result on your server. The aggregation would only be required once. Even if the data is dynamic, if the changes are infrequent then you could benefit from aggregating only when the data changes and caching that result. If you have highly dynamic data such that you'll have to aggregate it fresh with every page load, then doing it on the server vs. the client could depend on how many simultaneous users you expect. A lot of simultaneous users might bring your server to its knees. OTOH, if you have a small number of users, then your server probably (possibly?) has more horsepower than your users' browsers, in which case it will be able to perform the aggregation faster than the browser. Also keep in mind the bandwidth cost of sending 11 MB to your users. Might not be a big deal ... unless they're loading the page a lot and doing it on mobile devices.
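A minimal sketch of that server-side pre-aggregation step, assuming Node.js with the d3-dsv package and invented column names ("state", "premium"); the idea is to run it once (or whenever the data changes) and serve the small JSON instead of the 11 MB CSV:

// One-off (or on-change) pre-aggregation: 11 MB CSV in, small JSON summary out.
// Run with Node.js; "d3-dsv" is assumed to be installed, and "state"/"premium" are invented column names.
var fs = require("fs");
var dsv = require("d3-dsv");

var rows = dsv.csvParse(fs.readFileSync("QHP_Individual_Medical_Landscape.csv", "utf8"));

var totals = {};
rows.forEach(function (row) {
  totals[row.state] = (totals[row.state] || 0) + (+row.premium || 0);
});

fs.writeFileSync("summary.json", JSON.stringify(totals));
// The page then loads summary.json (a few KB) instead of the full CSV.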
Try simplifying the data (also suggested in the comment from Stephen Thomas)
Try pre-parsing the data into JSON. This will likely result in a larger file (more network time) but less parsing overhead (lower client CPU). If your problem is the parsing, this could save time.
Break the data up by some kind of sharding key, such as year. Limit the initial load to that shard and then load the other data files on demand as needed.
Break up the data by time, but show everything in the UI: load the charts with the default view (such as the most recent timeframe), then asynchronously add the additional files as they arrive (or once they have all arrived), as sketched below.
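A sketch of that load-the-latest-shard-first idea, using the promise-based d3.csv of newer d3 releases; the per-year file names and the draw() function are placeholders:

// Load the most recent shard first, draw, then merge older shards in as they arrive.
// File names and draw() are placeholders; assumes a promise-based d3.csv (d3 v5+).
var years = [2015, 2014, 2013];           // newest first
var loaded = [];

d3.csv("data/landscape-" + years[0] + ".csv").then(function (latest) {
  loaded = latest;
  draw(loaded);                           // initial chart from the most recent data

  years.slice(1).forEach(function (year) {
    d3.csv("data/landscape-" + year + ".csv").then(function (older) {
      loaded = loaded.concat(older);
      draw(loaded);                       // redraw as each additional shard arrives
    });
  });
});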
How about server-side (gzip) compression? The transfer should be a fraction of the size after compression, and the browser will decompress it in the background.
