Weird result using group by - sorting

I help run a small radio station and we log stats into a MySQL database.
I have started seeing weird results when using the following code to grab peaks for each entry logged in the stats table.
I have tried a few things in the code but have not seen any change in the results so far.
SELECT *
FROM
(
    SELECT
        stat_utc,
        stat_master_count,
        stat_master_track
    FROM
        stats_all_master
    ORDER BY
        stat_master_count DESC,
        stat_utc DESC
) AS my_table_tmp
GROUP BY
    stat_master_track
ORDER BY
    stat_master_count DESC,
    stat_utc DESC
LIMIT 5;
If I remove the GROUP BY stat_master_track I see results with a higher stat_master_count.
With GROUP BY:
41 LIVE SHOW ~ Erebuss - Mixset Showcase Mix 2018 (29/12/18)
41 Mixset ~ Inna Rhythm Recordings - Mix 01 (2015)
39 LIVE SHOW ~ DJ Ransome - Mixset Showcase 2017
38 LIVE SHOW ~ Parts Unknown - PartsUnknown #007
37 LIVE SHOW ~ MrKrotos - Mixset Showcase 2018 Part 2 (2018-12-29)
Without the GROUP BY I see records with higher counts, like:
51 LIVE SHOW ~ DnB_Bo - Mixset Showcase Mix 2018 Part 1 (29/12/18)
47 LIVE SHOW ~ Erebuss - Renegade sessions #0006 (UK)
46 LIVE SHOW ~ DnB_Bo - Mixset Showcase Mix 2018 Part 1 (29/12/18)
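A likely explanation: with MySQL's non-standard GROUP BY handling (before ONLY_FULL_GROUP_BY is enforced), the server is free to return an arbitrary row's values for the non-grouped columns, and the ORDER BY inside the derived table is not guaranteed to survive the outer grouping. A deterministic way to get each track's peak is to join against the per-track maximum - a sketch, assuming the peak is simply MAX(stat_master_count) per stat_master_track:
SELECT s.stat_utc, s.stat_master_count, s.stat_master_track
FROM stats_all_master AS s
JOIN
(
    -- one row per track carrying its highest count
    SELECT stat_master_track, MAX(stat_master_count) AS peak_count
    FROM stats_all_master
    GROUP BY stat_master_track
) AS peaks
    ON  peaks.stat_master_track = s.stat_master_track
    AND peaks.peak_count = s.stat_master_count
ORDER BY s.stat_master_count DESC, s.stat_utc DESC
LIMIT 5;
Note that if a track reaches the same peak count more than once, this returns one row per occurrence; group the outer query as well if you only want the most recent one.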


API Response code for Page Speed Insights is presenting absurd numbers

Context: We are using this API to test the performance of selected pages: https://developers.google.com/speed/docs/insights/v5/reference/pagespeedapi/runpagespeed
Problem statement: Until 18 Aug 2022, the values in the API response were in ranges similar to what a user sees when manually checking scores at https://pagespeed.web.dev/. Since 18 Aug, however, some of the values have dropped significantly, while Performance_score, which used to be a decimal, has started coming back as 1 - indicating a perfect 100 performance score for the mobile strategy - which we believe to be a data error.
Screenshot 1: depicts the score being reported as 1 (being a decimal, this reads as a score of 100 in the API explorer)
Screenshot 2: depicts the score being reported as 61 when the same URL is checked manually at https://pagespeed.web.dev/
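For reference, the v5 API reports Lighthouse category scores as fractions of 1 rather than out of 100, so 0.61 corresponds to the 61 shown in the web UI and a raw 1 really does mean 100. Abridged, the field sits roughly here in the response:
{
  "lighthouseResult": {
    "categories": {
      "performance": { "score": 0.61 }
    }
  }
}
Multiply the score by 100 to compare it with what https://pagespeed.web.dev/ displays; remaining differences between runs can still be ordinary lab-data variance.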

Prometheus: Prevent starting new time series on label change

Assuming the following metric:
cpu_count{machine="srv1", owner="Alice", department="ops"} 8
cpu_count{machine="srv1", owner="Bob", department="ops"} 8
I'd like to be able to prevent starting a new time series on owner change. It should still be considered the same instance, but I would like to be able to look up by owner.
I don't particularly care if it matches only on my_metric{owner=~"Bob"} or on both my_metric{owner=~"Bob"} and my_metric{owner=~"Alice"}, I just need to make sure it does not count twice on my_metric{machine=~"srv1"} or my_metric{department=~"ops"}.
I'm willing to accept that using labels to group instances in this manner is not the correct approach, but what is?
When you add the label "owner" to this kind of metric, I think you're trying to accomplish a kind of "asset management", which would be done better with a tool developed specifically for that goal. Prometheus isn't a suitable tool for keeping track of who is using each machine in your company.
That said, every time the owner of a machine changes you could work around the issue by deleting the old time series through the REST API (the TSDB admin endpoints must be enabled with --web.enable-admin-api), executing something like this:
curl --silent --user USER:PASS --globoff --request POST "https://PROMETHEUS-SERVER/api/v1/admin/tsdb/delete_series?match[]={machine='srv1',owner='Bob'}"
If you can change the code, it would be better to have a metric dedicated to the ownership:
# all metrics are identified as usual
cpu_count{machine="srv1", department="ops"} 8
# use an info metric to give details about the owner
machine_info{machine="srv1", owner="Alice", department="ops"} 1
You can still aggregate the information if you need it (GROUP_LEFT pulls the owner label onto the result):
cpu_count * ON(machine, department) GROUP_LEFT(owner) machine_info
That way, the owner is not polluting all your metrics. You will still have issues when the owner of a machine changes, while you wait for the old series to disappear (5 minutes before staleness kicks in).
I have not tried it but a solution could be to use the time at which the ownership changed (if you can provide it) as a metric value - epoch time in seconds.
# owner changed at Sun, 08 Mar 2020 22:05:53 GMT
machine_info{machine="srv1", owner="Alice", department="ops"} 1583705153
# previous owner, Sat, 01 Feb 2020 00:00:00 GMT
machine_info{machine="srv1", owner="Bob", department="ops"} 1580515200
And then use the following expression whenever you need the current owner - it returns 1 for the series belonging to the latest owner and 0 for a stale one, which only matters when the owner has changed within the last 5 minutes, while both series still exist:
machine_info == BOOL ON(machine, department) GROUP_LEFT() (max(machine_info) BY (machine, department))
Quite a mouthful but it would approach what you want.
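Putting the pieces together - an untested sketch, assuming the timestamp-valued machine_info above. Dropping BOOL turns the comparison into a filter that keeps only the current owner's series, and clamp_max() squashes the timestamp value down to 1 so that it does not distort the cpu_count values:
cpu_count
  * ON(machine, department) GROUP_LEFT(owner)
    clamp_max(
        machine_info
          == ON(machine, department) GROUP_LEFT()
             (max(machine_info) BY (machine, department)),
        1
    )
This should yield cpu_count with the current owner attached as a label, even during the staleness window.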

How to select timeframe from https://fullcalendar.io/docs/date-clicking-selecting-demo week view?

I am trying to select a timeframe from the https://fullcalendar.io/docs/date-clicking-selecting-demo calendar week view using Robot Framework.
I have tried different xpath selections with no luck; below is the latest one. The problem is that it always selects the same column, even if I change the value of th[4].
Mouse Down xpath://thead/tr/td/div/table/thead/tr/th[4]//ancestor::table/tbody//tr[13]//td[2]
Mouse Up xpath://thead/tr/td/div/table/thead/tr/th[4]//ancestor::table/tbody//tr[32]//td[2]
Is there any way to select timeframe on the specific day?
You will need to find the matching xpath by looking at the "data-date" attribute. Here is a working example; you can change the date to whichever one you want to click:
Click calendar dates
    Open Browser    https://fullcalendar.io/docs/date-clicking-selecting-demo    chrome
    Click Element    xpath=//*[@id="calendar"]/div[2]/div/table/tbody/tr/td/div/div/div[5]/div[1]/table/tbody/tr/td[contains(@data-date, '2019-06-26')]
    Alert Should Be Present
Results
Click calendar dates | PASS |
------------------------------------------------------------------------------
Tests.Create Program | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Tests | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
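To select an actual time range rather than a single day, a drag gesture over the week view's time grid may work. Here is a rough, untested sketch along the same lines - the fc-timeGridWeek-button class, the data-date attribute on the time grid's background cells, and the 100 px offset (roughly two hours of slots) are all assumptions about the FullCalendar demo markup:
Select calendar timeframe
    Open Browser    https://fullcalendar.io/docs/date-clicking-selecting-demo    chrome
    # switch the demo to week view first
    Click Element    xpath=//button[contains(@class, 'fc-timeGridWeek-button')]
    # press at the centre of the chosen day column, drag 100 px down and
    # release, which FullCalendar interprets as selecting that time span
    Drag And Drop By Offset    xpath=//div[contains(@class, 'fc-time-grid')]//td[@data-date='2019-06-26']    0    100
    Alert Should Be Present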

Load testing: website now serves fewer users

I was testing the website at 30 users/second and it was working fine, but now it is not even serving 25 users/second. The website is a search-engine kind of site. Between these two tests (30 users/second and 25 users/second) we started the crawler to get some sites crawled, then stopped it again before load testing; 30 users/second worked fine before the crawler was turned on. Elasticsearch is used as the DB for the website, and it went down saying no nodes were available.
I've used a standard thread group. Below is the configuration.
Total Samples: 250
Ramp up (sec) : 10
Loop count : 1
When I checked the results in the table, everything showed green, but when we hit the site it throws a NoNodeAvailableException.

Performance degradation using Azure CDN?

I have experimented quite a bit with Azure's CDN, and I thought I was home safe after a successful setup using a web-role.
Why the web-role?
Well, I wanted the benefits of compression and caching headers, which I was unable to obtain the normal blob way. And as an added bonus, the case-sensitivity constraint was eliminated as well.
Enough about the choice of CDN serving; while all content was previously served from the same domain, I now serve more or less all "static" content from cdn.cuemon.net. In theory this should improve performance, since browsers can fetch content in parallel across "multiple" domains instead of just one.
Unfortunately this has led to a decrease in performance, which I believe has to do with the number of hops before content is served (using a tracert command):
C:\Windows\system32>tracert -d cdn.cuemon.net
Tracing route to az162766.vo.msecnd.net [94.245.68.160]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 192.168.1.1
2 21 ms 21 ms 21 ms 87.59.99.217
3 30 ms 30 ms 31 ms 62.95.54.124
4 30 ms 29 ms 29 ms 194.68.128.181
5 30 ms 30 ms 30 ms 207.46.42.44
6 83 ms 61 ms 59 ms 207.46.42.7
7 65 ms 65 ms 64 ms 207.46.42.13
8 65 ms 67 ms 74 ms 213.199.152.186
9 65 ms 65 ms 64 ms 94.245.68.160
C:\Windows\system32>tracert cdn.cuemon.net
Tracing route to az162766.vo.msecnd.net [94.245.68.160]
over a maximum of 30 hops:
1 1 ms 1 ms 1 ms 192.168.1.1
2 21 ms 22 ms 20 ms ge-1-1-0-1104.hlgnqu1.dk.ip.tdc.net [87.59.99.217]
3 29 ms 30 ms 30 ms ae1.tg4-peer1.sto.se.ip.tdc.net [62.95.54.124]
4 30 ms 30 ms 29 ms netnod-ix-ge-b-sth-1500.microsoft.com [194.68.128.181]
5 45 ms 45 ms 46 ms ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.10]
6 87 ms 59 ms 59 ms xe-3-2-0-0.fra-96cbe-1a.ntwk.msn.net [207.46.42.50]
7 68 ms 65 ms 65 ms xe-0-1-0-0.zrh-96cbe-1b.ntwk.msn.net [207.46.42.13]
8 65 ms 70 ms 74 ms 10gigabitethernet5-1.zrh-xmx-edgcom-1b.ntwk.msn.net [213.199.152.186]
9 65 ms 65 ms 65 ms cds29.zrh9.msecn.net [94.245.68.160]
As you can see from the above trace route, all external content is delayed for quite some time.
It is worth noting that the Azure service is set up in North Europe and I am located in Denmark - so isn't this trace route a bit over the top?
Another issue might be that the web-role runs on two extra-small instances; I have not yet found the time to try two small instances, but I know that Microsoft limits extra-small instances to 5 Mbps on the WAN, whereas small and above get 100 Mbps.
I am just unsure whether this applies to the CDN as well.
Anyway - any help and/or explanation is greatly appreciated.
And let me state that I am very satisfied with the Azure platform - I am just curious about the matters mentioned above.
Update
New tracert without the -d option.
Inspired by user728584, I did some research and found this article, http://blogs.msdn.com/b/scicoria/archive/2011/03/11/taking-advantage-of-windows-azure-cdn-and-dynamic-pages-in-asp-net-caching-content-from-hosted-services.aspx, which I will investigate further with regard to public cache-control and the CDN.
This does not explain the excessive hop count, but I hope a skilled network professional can help shed light on the matter.
Rest assured that I will keep you posted on my findings.
Not to state the obvious, but I assume you have set the Cache-Control HTTP header to a large max-age, so that your content is not removed from the CDN cache and served from Blob Storage when you ran your tracert tests?
There are quite a few edge servers near you so I would expect it to perform better: 'Windows Azure CDN Node Locations' http://msdn.microsoft.com/en-us/library/windowsazure/gg680302.aspx
Maarten Balliauw has a great article on usage and use cases for the CDN (this might help?): http://acloudyplace.com/2012/04/using-the-windows-azure-content-delivery-network/
Not sure if that helps at all, interesting...
Okay, after I implemented public Cache-Control headers, the CDN appears to do what is expected: delivering content from some number of nodes in the CDN cluster.
The caveat is that this is based on experience - I have not measured it for concrete validation.
However, this link supports my theory: http://msdn.microsoft.com/en-us/wazplatformtrainingcourse_windowsazurecdn_topic3,
The time-to-live (TTL) setting for a blob controls for how long a CDN edge server returns a copy of the cached resource before requesting a fresh copy from its source in blob storage. Once this period expires, a new request will force the CDN server to retrieve the resource again from the original blob, at which point it will cache it again.
This was my assumed challenge: the CDN-referenced resources kept polling the original blob.
Credit must also be given to this link (provided by user728584): http://blogs.msdn.com/b/scicoria/archive/2011/03/11/taking-advantage-of-windows-azure-cdn-and-dynamic-pages-in-asp-net-caching-content-from-hosted-services.aspx.
And the final link for now: http://blogs.msdn.com/b/windowsazure/archive/2011/03/18/best-practices-for-the-windows-azure-content-delivery-network.aspx
For ASP.NET pages, the default behavior is to set cache control to private. In this case, the Windows Azure CDN will not cache this content. To override this behavior, use the Response object to change the default cache control settings.
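For a classic ASP.NET page, that override is only a few lines - a minimal sketch using the standard System.Web cache-policy API (the 72-hour lifetime simply mirrors the default CDN TTL mentioned below):
// in Page_Load or an HTTP handler: mark the response publicly cacheable
Response.Cache.SetCacheability(HttpCacheability.Public);
// and give the CDN an explicit lifetime via max-age / Expires
Response.Cache.SetMaxAge(TimeSpan.FromHours(72));
Response.Cache.SetExpires(DateTime.UtcNow.AddHours(72));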
So my conclusion so far in this little puzzle is that you must pay close attention to your Cache-Control header (which is often set to private for good reasons). If you skip the web-role approach, the TTL defaults to 72 hours, so you may never experience what I experienced; it will just work out of the box.
Thanks to user728584 for pointing me in the right direction.
