Context: We are using this API for testing the performance of selected pages: https://developers.google.com/speed/docs/insights/v5/reference/pagespeedapi/runpagespeed
Problem statement: Until 18 Aug 2022, the values in the API response were in ranges similar to the scores a user sees when checking manually at https://pagespeed.web.dev/. However, after 18 Aug, some of the values have dropped significantly, while Performance_score, which used to be a decimal, has started coming back as 1 (indicating a performance score of 100 for the mobile strategy), which we believe to be a data error.
Screenshot 1: Depicting the score being reported as 1 (being a decimal, this indicates a score of 100 in the API Explorer)
Screenshot 2: Depicting the score being reported as 61 when the same URL is checked manually at https://pagespeed.web.dev/
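For reference, this is how we call the API and read the score. A minimal sketch in Python (the page URL and API key are placeholders); in the v5 response the performance category score is the decimal at lighthouseResult.categories.performance.score, so a raw value of 1 does correspond to a displayed score of 100:

import requests

ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {
    "url": "https://example.com",  # placeholder page under test
    "strategy": "mobile",
    "key": "YOUR_API_KEY",         # placeholder API key
}
data = requests.get(ENDPOINT, params=params).json()
# Category scores are decimals in [0, 1]; multiply by 100 to compare
# with what https://pagespeed.web.dev/ displays.
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print(score, "->", round(score * 100))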
I'm working on Elasticsearch, where I have to apply the idea of bucketing based on the lastActiveDate of a user.
The current process is below:
(Search String) -> [Elasticsearch (match query + range query)] => (Response (user with score)) => then apply pagination
What I need to implement is:
(Search String) -> [Elasticsearch (match query + range query)] => (Response (keep the relevancy score as is) where [lastActiveDate is in the range of 1-3 months from the current date]
+ the new addition required: append a second response (again keeping the relevancy score as is) where [lastActiveDate is in the range of 4-6 months from the current date]) => then apply pagination
Now, this could be solved by telling the API consumer to call my API multiple times with the appropriate date ranges (solving the issue at the client (consumer) level). But as I am working with microservices, this logic should be implemented at my end so that other teams don't have to change their code.
My issue is with pagination and data duplication, whichever way I implement it.
Solutions that I thought might work:
Apply a multi-search query on the same index, but then I don't get a single response object; I get multiple response objects, each with its own pagination object. (Not sure how to manipulate this for my implementation, as pagination is not global.)
Make the REST API stateful: send a pagination object (some sort of hash) from which I can tell whether I have to query for 1-3 months or 4-6 months. I could do this, but then what's the point of a REST API?
The known problems:
I have two sections (1-3, 4-6). During pagination, if the number of elements returned is less than the requested page size, consumers won't call my API again, because that signals the data has run out. To continue the response, I would have to query my database again for the 4-6 month range to fill up the requested page size.
If I go with the above approach, I'll have a data duplication issue, as in the next call (4-6) I'd send some of the same data again. To mitigate this I could send an object in the response and tell others to send it back on the next call, but that again creates an issue, because I'm then dependent on the previous call.
It seems I'm out of options here. Can anyone help with how to do this?
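For what it's worth, one direction that would keep pagination global and the API stateless is a single query spanning both date ranges, sorted bucket-first and by relevancy second, so plain from/size pagination never duplicates data. A minimal sketch with the Python client, assuming an index users, a text field name, and a date field lastActiveDate (all placeholders):

import time
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def search_users(text, page, size):
    # Boundary between the 1-3 month bucket and the 4-6 month bucket (~90 days ago, in ms).
    cutoff_ms = int((time.time() - 90 * 24 * 3600) * 1000)
    body = {
        "query": {
            "bool": {
                "must": [{"match": {"name": text}}],
                # One range filter spanning both buckets (1-6 months back).
                "filter": [{"range": {"lastActiveDate": {"gte": "now-6M/d", "lte": "now-1M/d"}}}],
            }
        },
        "sort": [
            # Bucket 0 = last 1-3 months, bucket 1 = 4-6 months.
            {"_script": {
                "type": "number",
                "script": {
                    "source": "doc['lastActiveDate'].value.toInstant().toEpochMilli() >= params.cutoff ? 0 : 1",
                    "params": {"cutoff": cutoff_ms},
                },
                "order": "asc",
            }},
            {"_score": {"order": "desc"}},  # relevancy kept as-is within each bucket
        ],
        "from": page * size,
        "size": size,
    }
    return es.search(index="users", body=body)

Because Elasticsearch returns one globally ordered result set, consumers paginate exactly as before, and the 4-6 month users simply follow the 1-3 month ones without any duplication.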
Could any of you please help me / give suggestions on how I can achieve this? A matrix (10 rows and 12 columns) of entries runs on to several pages (page by page, with a link to the next page). I need to select the entries and make a payment for every run. It is not a good idea to create samplers page by page, so I am trying to achieve the below:
{
1. If entries found >= 20 on the first page:
a. HTTP POST
b. Go to step-4
2. If entries < 20 AND Next page (link) exists:
a. Click Next Page link (HTTP POST)
b. Go to Step-1
3. If entries < 20 AND Next page does not exist:
a. Print a message
4. Payment Page
}
The JMeter components you will need are:
If Controller - for choosing next request depending on the number of results
Module Controller - as the target for the If Controller
More information: Easily Write a GOTO Statement in JMeter
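For example, the three If Controller conditions could look like the following (a sketch: entriesFound and nextPageLink are assumed variables, e.g. filled by a Regular Expression Extractor with a default value of NOT_FOUND):

${__jexl3(${entriesFound} >= 20)}
${__jexl3(${entriesFound} < 20 && "${nextPageLink}" != "NOT_FOUND")}
${__jexl3(${entriesFound} < 20 && "${nextPageLink}" == "NOT_FOUND")}

Each condition guards the corresponding step, and the Module Controller jumps back to the step-1 block to form the loop.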
I didn't find a 'moving average' feature and I'm wondering if there's a workaround.
I'm using influxdb as the backend.
Grafana supports adding a movingAverage(). I also had a hard time finding it in the docs, but you can (somewhat hilariously) see its usage on the feature intro page.
As usual, click on the graph title, choose edit, and add the metric movingAverage(), as described in the Graphite documentation:
movingAverage(seriesList, windowSize)
Graphs the moving average of a metric (or metrics) over a fixed number of past points, or a time interval.
Takes one metric or a wildcard seriesList followed by a number N of datapoints or a quoted string with a length of time like ‘1hour’ or ‘5min’ (see from / until in the render API for examples of time formats). Graphs the average of the preceding datapoints for each point on the graph. All previous datapoints are set to None at the beginning of the graph.
Example:
&target=movingAverage(Server.instance01.threads.busy,10)
&target=movingAverage(Server.instance*.threads.idle,'5min')
Grafana does no calculations itself; it just queries a backend and draws nice charts. So aggregating abilities depend solely on your backend. While Graphite supports windowing functions such as the moving average, InfluxDB currently doesn't support it.
There are quite a lot of requests for a moving average in InfluxDB on the web. You can leave your "+1" and track progress in this ticket: https://github.com/influxdb/influxdb/issues/77
A possible (yet not so easy) workaround is to create a custom script (cron, daemon, whatever) that will pre-calculate the MA and save it in a separate InfluxDB series.
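A minimal sketch of such a script with the Python influxdb client, assuming InfluxDB 1.x; the database, series, and field names (metrics, cpu_load, cpu_load_ma, value) are placeholders:

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="metrics")

# Pull the recent raw points.
points = list(client.query("SELECT value FROM cpu_load WHERE time > now() - 1h").get_points())

window = 10
out = []
for i in range(window - 1, len(points)):
    # Average of the current point and the (window - 1) preceding ones.
    avg = sum(p["value"] for p in points[i - window + 1 : i + 1]) / window
    out.append({
        "measurement": "cpu_load_ma",  # the separate pre-calculated series
        "time": points[i]["time"],
        "fields": {"value": avg},
    })

client.write_points(out)

Run it from cron at whatever resolution you need and point the Grafana panel at the cpu_load_ma series.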
I found myself here trying to do a moving average in Grafana with a PostgreSQL database, so I'll just add a way to do with a SQL query:
SELECT
date as time,
AVG(daily_average_column)
OVER(ORDER BY date ROWS BETWEEN 4 PRECEDING AND CURRENT ROW)
AS value,
'5 Day Moving Average' as metric
FROM daily_average_table
ORDER BY time ASC;
This uses a "window" function to average of the last 4 rows (plus the current row).
I'm sure there are ways to do this with MySQL as well.
The method and capability for this depend on your data source.
You specified InfluxDB, so your query will need to wrap an 'Aggregation function' [such as mean($field)] within the moving_average($aggregation_function, $num_of_points) 'Transformation function'.
In the 'Metrics' tab, you will find both of these functions in the 'select' portion of the menu.
Craft your query with the 'Aggregation function' (mean, min, max, etc.) first; this way you can make sure the data looks as you expect.
After this, just click the '+' button next to the 'Aggregation function', and under the menu 'Transformations', select 'moving_average'.
The number in brackets will be the number of points you want the average taken over.
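In raw query mode the same thing looks something like this (measurement and field names are placeholders; $timeFilter and $__interval are Grafana's built-in variables):

SELECT moving_average(mean("value"), 10) FROM "cpu_load" WHERE $timeFilter GROUP BY time($__interval) fill(null)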
Try avg_over_time(mymetric[5m]) (PromQL, for a Prometheus backend).
InfluxDB 2 allows you to calculate the moving average in the query, e.g.:
from(bucket: "iot")
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "PoolWeather")
|> filter(fn: (r) => r["_field"] == "batteryvoltage")
|> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)
|> movingAverage(n: 10)
|> yield(name: "average")
Another option is to report the data as "timing" metrics rather than counts.
This is easy to do, especially with StatsD in your stack.
Plotting timing data (coming from StatsD) as an average of the reported data points is already built in.
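As an illustration, with the Python statsd client (server address and metric name are placeholders), each data point is reported as a timer, and the averaging then happens at StatsD's flush interval rather than in Grafana:

import statsd

client = statsd.StatsClient("localhost", 8125)  # StatsD server
client.timing("myapp.response_time", 320)       # value in milliseconds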
I have the following problem with tracking of a Magento purchase in Google Analytics (custom theme, different from the default checkout process).
My goal settings are as follows: http://db.tt/W30D0CnL, where step 3 equals /checkout/onepage/opc-review-placeOrderClicked.
As you can see from the funnel visualization (http://db.tt/moluI29d), after step 2 (Checkout Start) there are a lot of exits toward /checkout/onepage/opc-review-placeOrderClicked, which is set as step 3, but step 3 always reports 0.
Is there something that I'm missing here?
I've found the problem. Apparently the second step (/checkout/onepage) was being matched even on the third step, since an unanchored match on /checkout/onepage also matches /checkout/onepage/opc-review-placeOrderClicked.
When I changed it to a regex match (/checkout/onepage$), everything started to work.
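A quick illustration of why the anchor matters (a hypothetical Python check, not part of the GA setup itself):

import re

steps = ["/checkout/onepage", "/checkout/onepage/opc-review-placeOrderClicked"]
print([s for s in steps if re.search("/checkout/onepage", s)])   # unanchored: matches both URLs
print([s for s in steps if re.search("/checkout/onepage$", s)])  # anchored: matches only /checkout/onepage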