Anyone who knows the Core Web Vitals in detail, please help me out with some points.
How is the Origin Summary different from Lab Data?
How does PageSpeed Insights get the Origin Summary? Is it the aggregate rating of the last 28 days for the same page, or for similar pages?
I checked the website's category pages: all of them show the same LCP, and in Search Console they sit inside one section. It is a bit confusing to me how all of these pages can have the same Origin Summary but different Lab Data.
I have attached a screenshot so you can see what is confusing me: the LCP in Lab Data is 6.6s, while the Origin Summary shows 3.6s. How does this work?
Origin Summary
The timing number / value shown is the average of all of the available real world data for that domain, across multiple pages.
This is a rolling 28-day average for the pages for which it has sufficient data.
The bars show aggregated data so you can see how many visitors fall into each category (red = poor, orange = needs improvement, green = good).
Because this is an average across all of the data in the Chrome User Experience Report (CrUX) dataset, you can see a low average yet still have one page, or one screen size, that performs poorly. That is why the aggregate bars are included: the synthetic test runs at only one desktop and one mobile resolution, and you may only be testing the home page while other pages perform poorly.
SCORING: This has no bearing on your score and is for information only / to help you identify issues that the synthetic test may not pick up.
Field Data
If the page being tested had more traffic you might get a per-page breakdown of the real-world data (this would show as "Field Data"). You do not have enough traffic for that yet, which is why you only get the origin summary.
In the same way that origin data is aggregated and averaged, if the page has enough data in the CrUX dataset you will see an average time/value for that page only, in addition to the origin summary data.
SCORING: This has no bearing on your score and is for information only / to help you identify issues that the synthetic test may not pick up.
Lab Data
This is the data from the synthetic test you just ran. This is where your score comes from.
The origin summary and field data have no bearing at all on the score you see here, they are purely informational.
The score and subsequent action points are generated "on the fly" based on this test run.
SCORING: The score you see when you run an audit is calculated from the data gathered on that run. No Origin data or Field Data is used in this calculation.
Example
In the example you gave, the LCP in Lab Data (synthetic) is 6.6s and in the Origin data (real world) it is 3.6s.
To understand how this can be the case, let's say you have a page in the CrUX dataset for which Google has three real-world LCP values: 2s, 3.9s and 4.9s.
Based on the LCP scoring (2s = good, 3.9s = needs improvement and 4.9s = poor), Google would then give you aggregate bars for that page of 33% green (good), 33% orange (needs improvement) and 33% red (poor).
These would be your origin summary bars.
The time displayed for your origin summary would be 3.6s, the average of those three values in the CrUX dataset ((2 + 3.9 + 4.9) / 3 = 3.6).
As for your Lab Data at 6.6s, the test has loaded the page with throttling applied to represent a 4G connection on a mid-tier mobile phone. It then uses the performance data it gathers to calculate the LCP time.
If you made improvements to the page and re-ran the report, the Lab Data LCP could drop instantly, as it is based on each run, whereas your origin summary data would take up to 28 days to fully reflect the change.
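To make the origin-summary arithmetic above concrete, here is a small sketch (the sample values are the three from the example; the 2.5s and 4s cut-offs are Google's published LCP thresholds for "good" and "poor"):

    // Classify each real-world LCP sample and average them, as in the example above.
    const samplesSeconds = [2, 3.9, 4.9]; // the three CrUX values from the example

    function lcpBucket(seconds: number): 'good' | 'needs improvement' | 'poor' {
      if (seconds <= 2.5) return 'good';              // green
      if (seconds <= 4.0) return 'needs improvement'; // orange
      return 'poor';                                  // red
    }

    const buckets = samplesSeconds.map(lcpBucket);
    // -> ['good', 'needs improvement', 'poor'], i.e. 33% / 33% / 33% bars

    const average = samplesSeconds.reduce((a, b) => a + b, 0) / samplesSeconds.length;
    // -> 3.6, the origin summary time shown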
So if my data is not in the CrUX dataset, how can I identify poorly performing pages?
Let's assume you have a page that does well in the Lighthouse synthetic test but it performs poorly in the real world at certain screen sizes.
Let's also assume there isn't enough data for it to show Field Data for that page.
How can you find a page that is ruining your origin summary?
For that you need to gather Real User Metrics (RUM) data.
RUM data is data gathered in the real world as real users use your site and stored on your server for later analysis / problem identification.
There is an easy way to do this yourself, using the Web Vitals Library.
This allows you to gather CLS, FID, LCP, FCP and TTFB data, which is more than enough to identify pages that perform poorly.
You can pipe the data gathered to your own API, or to Google Analytics for analysis.
This is the best way to gather page-specific data when there isn't any / enough data in the CrUX dataset for you to analyse.
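As a minimal sketch of that approach, assuming the web-vitals npm package (the older getCLS/getFID-style exports) and a hypothetical /analytics endpoint on your own server:

    // Send each Core Web Vitals metric, plus the page path, to your own endpoint.
    import { getCLS, getFID, getLCP, getFCP, getTTFB, Metric } from 'web-vitals';

    function sendToAnalytics(metric: Metric): void {
      const body = JSON.stringify({
        name: metric.name,       // e.g. 'LCP'
        value: metric.value,     // milliseconds (unitless for CLS)
        page: location.pathname, // lets you group results per page later
      });
      // sendBeacon survives page unloads; fall back to fetch with keepalive.
      if (!navigator.sendBeacon('/analytics', body)) {
        fetch('/analytics', { method: 'POST', body, keepalive: true });
      }
    }

    getCLS(sendToAnalytics);
    getFID(sendToAnalytics);
    getLCP(sendToAnalytics);
    getFCP(sendToAnalytics);
    getTTFB(sendToAnalytics);

Because every beacon includes the page path, you can later group the results by URL and spot the pages that drag the origin summary down.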
The origin summary is the aggregate experience of all the pages on your website, while Lab Data is specific to the page you entered in the field.
This means the LCP in the origin summary is the aggregate LCP of all the pages on your website.
Example: let's say you have 3 pages on your site with an LCP of 2s, 5s and 7s respectively. The average LCP of your website is then about 4.7s, which makes it possible for an individual page to have a higher LCP than the origin summary: 7s in this case.
Refer to the example above for how the LCP is calculated. It is the aggregate rating over the last 28 days for all of the published pages on the domain.
Are you checking just the similar pages?
Related
I'm gathering data from load sensors at about 50 Hz. I might have 2-10 sensors running at a time. This data is stored locally, but after a period of about a month it needs to be uploaded to the cloud. The data within any one second can vary quite significantly and is quite dynamic.
It's too much data to send because it's going over GSM and the signal will not always be great.
The most simplistic approach I can think of is to look at the 50 data points in one second and reduce them to just enough data to make a box-and-whisker plot. The data stored in the cloud could then be used to create dashboards that look similar to stock charts. This would at least show me the max, min and average, and give some idea of the distribution of the load during that second.
This is probably oversimplified though, so I was wondering whether there is a common approach to this problem in data science: take a dense set of data and reduce it while still capturing the highlights and not losing its meaning.
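For concreteness, the per-second reduction I have in mind is something like this (just a sketch; the array of ~50 readings is made up):

    // Reduce one second of ~50 samples to a five-number summary plus the mean.
    interface SecondSummary {
      min: number;
      q1: number;
      median: number;
      q3: number;
      max: number;
      mean: number;
    }

    function summarise(samples: number[]): SecondSummary {
      const sorted = [...samples].sort((a, b) => a - b);
      const at = (p: number) => sorted[Math.round(p * (sorted.length - 1))]; // crude percentile
      return {
        min: sorted[0],
        q1: at(0.25),
        median: at(0.5),
        q3: at(0.75),
        max: sorted[sorted.length - 1],
        mean: samples.reduce((a, b) => a + b, 0) / samples.length,
      };
    }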
Any help or ideas appreciated
I have a dashboard that shows a sum of requests/sec from a Windows performance monitor counter collected by Prometheus.
sum(Total_Query_Received_persec)
I would like to see any issues right away if those requests/sec drop (which would indicate a problem).
For example, the singlestat panel could change color based on a comparison with the same number collected 10 minutes ago: turn the panel yellow if the requests/sec are 50% lower than they were 10 minutes ago, and red if they are 80% lower.
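In pseudocode, the rule I'm after is roughly this (the 50%/80% cut-offs are just my example numbers):

    // Compare the current requests/sec with the value from 10 minutes ago.
    function panelColor(current: number, tenMinutesAgo: number): 'green' | 'yellow' | 'red' {
      const drop = 1 - current / tenMinutesAgo; // fraction lost versus 10 minutes ago
      if (drop >= 0.8) return 'red';
      if (drop >= 0.5) return 'yellow';
      return 'green';
    }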
I know that you can configure this based on thresholds, but not sure if there is a way to query that info in the metric.
Is this possible at all?
Thanks
I'm not familiar enough with Grafana to provide all the details of handling the color-change scenarios with that tool, but within Prometheus the query you are interested in can likely be handled with the irate function. It's only recommended for fast-moving counters, and the documentation mentions that you should put the irate() inside a sum() to keep the aggregation from hiding the volatility from the function.
You might also get perfectly acceptable performance and results from aggregating the detail with rate directly, such as rate(Total_Query_Received_persec[10m])
I ran a performance test for 500 VU using the "jp@gc - Stepping Thread Group". I noticed that from the 200 VU to 500 VU load, the hits/sec stayed at 20-25 consistently for 25 minutes until the end of the run, with an error rate of 0.04%.
I know that I could control the hits/sec by using Limit RPS or a Constant Throughput Timer, but I did not apply or enable either.
My questions are:
1. Was the run good or bad?
2. What should the hits/sec be for a 500 VU load?
3. Is the hits/sec determined by the BlazeMeter engine based upon its efficiency?
Make sure you correctly configure Stepping Thread Group
If you have the same throughput for 200 and 500 VU, that is not good. On an ideal system, throughput for 500 VU should be 2.5 times higher. If you are uncertain whether your application or the BlazeMeter engine(s) are to blame, you can check the health of BlazeMeter's instances during the test time frame on the Engine Health tab of your load test report.
As far as I'm aware, BlazeMeter relies on JMeter's throughput-calculation algorithm. According to the JMeter glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
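For a quick worked example of that formula (the numbers are invented for illustration):

    // 30,000 sampled requests from the start of the first sample to the end of the last,
    // spanning 20 minutes.
    const totalRequests = 30_000;
    const totalTimeSeconds = 20 * 60;                    // 1,200 s
    const throughput = totalRequests / totalTimeSeconds; // 25 requests/second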
Hits are not always the best measurement of throughput. Here is why: the number of hits can be drastically altered by settings on the servers for the applications under test related to the management of cached items. An example: say you have a very complex page with 100 objects on it. Only the top-level page is dynamic in nature, with the rest of the items being page components such as images, style sheets, fonts, JavaScript files, etc. In a model where no cache settings are in place you will find that all 100 elements have to be requested, each generating a "hit" in both reporting and the server stats. These will show up in your hits/second model.
Now optimize the cache settings so that some information is cached at the client for a very long period (logo images and fonts for a year), some for the build-interval term of one week (resident in the CDN or client), and only the dynamic top-level HTML remains uncached. In this model there is only one hit generated at the server, except for the period immediately after the build is deployed when the CDN is being seeded for the majority of users. Occasionally you will have a new CDN node come into play with users in a different geographic area, but after the first user seeds the cache the remainder are pulling from the CDN and then caching at the client. In this case your effective hits per second drop tremendously at both the CDN and origin servers, especially where you have returning users.
Food for thought.
Hi fellow GSA developers,
Just wanted to know, in your experience, what model of GSA you are using, how much concurrent search-request load your appliance serves successfully, and the total number of documents you have.
I know each and every environment is different, but one can scale the figures proportionally and get a sense of the capability of the GSA black box.
I'm calling the GSA a black box since you can never find out the physical memory or any other hardware spec, nor can you change it. The only way to scale is to buy more boxes :)
Note: the question is about the GSA as a search engine, not from the portal perspective. In other words, I'm only concerned about the GSA's QPS rather than a custom portal's QPS, since custom portals are, well, custom, and only as good as their design.
We use two GSAs with software version 7.2, arranged in a GSA^n "cluster". The index contains roughly 600,000 documents, and as all of them are protected the GSA has to spend quite a lot of effort determining which user is allowed to see which document.
Each of the two GSAs is guaranteed to perform 50 queries per second. We once did a load test, and because some of the queries completed in less than a second and thereby freed up their "slot" for incoming queries, we were able to process 140 queries per second for a noticeably long time.
99% of our queries complete in less than a second, and as we have a rather complex permission structure (users with lots of group memberships) I would say this is a good result.
Like @BigMikeW already said: to get your own figures you should do a load test. Google Support provides a script which can exhaust the GSA and tell you at which QPS rate it starts failing (it will simply return an HTTP status code in the 500 range).
And talking of "black box": you are able to find out the hardware specs. All of the GSAs I have seen so far (T3 and T4) have a Dell service tag. When you enter that tag on Dell's site you will find out what is inside the box. But that's pointless, because you can't modify any of it ;-) It only becomes interesting if you use a GSA model that can be repurposed.
This depends on a lot of factors beyond just what model/version you have.
Are the requests part of an already authenticated session?
Are you using early or late binding?
How many authentication mechanisms are you using?
What's the flex authz rule order?
What's the permit/deny ratio for the results?
Any numbers you get in response to this question will have no real meaning for any other environment. My advice would be to load test your own environment and use those results for capacity planning.
With the latest software, the GSA has 50 threads dedicated to search responses. This means that it can be responding to 50 requests at any given time. If searches take 0.5 seconds on average, you can average about 100 QPS.
If they take longer, you'll see this reduced. The GSA will also queue up a few requests before responding with the appropriate HTTP response saying the server is overloaded.
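That estimate is just concurrency divided by average latency; as a sketch (the 0.5-second average is the assumption from above):

    // Rough capacity estimate: concurrent search threads / average query time.
    const searchThreads = 50;     // threads dedicated to search responses
    const avgQuerySeconds = 0.5;  // assumed average search time
    const approxQps = searchThreads / avgQuerySeconds; // ≈ 100 queries per second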
I'm developing a mobile app with 40 million users per day.
The app will show articles that the user can choose to read: no images, just plain text. The user can pull to refresh to get new articles.
I would like to implement a Like button for each individual article (my own Like button, not Facebook's). Assume each user clicks Like 100 times per day; that equals 40M x 100 = 4,000M requests per day.
I'm a newbie with no experience of big projects. What is the best approach for my project? I found that the Google Channel API costs 0.0001 dollars per channel created, which is 80M x 0.0001 = 8,000 USD per day (assuming 2 connections per person), and that is quite expensive. Is there another way to do this, e.g. Ajax or a traditional POST? My app doesn't need real-time updates. Which one consumes fewer resources? Can someone please guide me? I really need help.
I plan to use Google App Engine for this project.
A small difference in efficiency would multiply to a significant change in operational costs at those volumes. I would not blindly trust theoretical claims made by documentation. It would be sensible to build and test each alternative design and ensure it is compatible with the rest of your software. A few days of trials with several thousand simulated users will produce valuable results at a bearable cost.
Channels, Ajax and conventional web requests are all feasible at the conceptual level of your question. Add in some monitoring code and compare the results of load tests at various levels of scale. In addition to performance and cost, the instrumentation code should also monitor reliability.
I highly doubt your app will get 40 million users a day, and doubt even more that each of those will click Like ten times a day.
But I don't understand why clicking Like would result in a lot of data transfer. It's a simple Ajax request, which wouldn't even need to return anything more than an empty response, with a 200 code for success and a 400 for failure.
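As a sketch of how small such a request can be (the /api/like endpoint and payload shape are made up for illustration):

    // Fire-and-forget Like: the server only needs to answer 200 (success) or 400 (failure).
    async function likeArticle(articleId: string): Promise<boolean> {
      const res = await fetch('/api/like', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ articleId }), // a few dozen bytes per click
      });
      return res.ok;
    }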
I would experiment with the different options on a small scale first, to get some data from which you can extrapolate your costs. However, a simple Ajax request with a lightweight handler is likely to be cheaper than the Channel API.
If you are getting 40m daily users, reading at least 100 articles, then making 100 likes, I'm guessing you will have something exceeding 8bn requests a day. On that basis, your instance costs are likely to be significant before even considering a like button. At that volume of requests, how you handle the request on the server side will be important in managing your costs.
Using tools like AppStats, Chrome Dev Tools and Apache JMeter will help you get a better view of your response times, instance and bandwidth costs, and user experience before you scale up.