Business Profile Performance: values are different from GBP App - google-api

There is a gap between the values returned by the Google Business Profile Performance API and the values shown in the Google Business Profile application.
For example, on 14 July 2022 the Google Business Profile Performance API gives me a value of 28 for the BUSINESS_DIRECTION_REQUESTS metric.
The request parameters:
{
  "dailyMetric": "BUSINESS_DIRECTION_REQUESTS",
  "dailyRange.startDate.day": 20,
  "dailyRange.startDate.month": 7,
  "dailyRange.startDate.year": 2021,
  "dailyRange.endDate.day": 17,
  "dailyRange.endDate.month": 7,
  "dailyRange.endDate.year": 2022,
  "name": "locations/10[...]19"
}
The response on 14th July:
{
  "date": {
    "year": 2022,
    "month": 7,
    "day": 14
  },
  "value": "28"
}
For the same day, the customer-actions graph in the Google Business Profile application shows 40 direction requests.
How can this gap between the values be explained?

I had the same issue: BUSINESS_DIRECTION_REQUESTS was much lower than the old, supposedly equivalent metric actions_driving_directions. I contacted Google Business Profile support and this was their answer:
“Going forward with the new Performance API multiple impressions by a unique user within a single day are counted as a single impression.”
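A minimal sketch (with made-up data) of why counting unique users per day yields lower numbers than counting raw actions, which would explain a 40-vs-28 style gap:

```python
from collections import defaultdict

# Hypothetical raw event log of direction requests: (user_id, date) pairs.
# Per the support answer, the old metric counted every action, while the
# new Performance API counts each unique user at most once per day.
events = [
    ("u1", "2022-07-14"), ("u1", "2022-07-14"),  # same user, twice in a day
    ("u2", "2022-07-14"),
    ("u3", "2022-07-14"), ("u3", "2022-07-14"), ("u3", "2022-07-14"),
]

raw_count = defaultdict(int)     # old-style: every action counts
unique_users = defaultdict(set)  # new-style: unique users per day

for user, day in events:
    raw_count[day] += 1
    unique_users[day].add(user)

print(raw_count["2022-07-14"])          # 6 raw actions
print(len(unique_users["2022-07-14"]))  # 3 deduplicated "impressions"
```

So for any user who asked for directions more than once on the same day, the Performance API reports fewer requests than the old graph.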


Spring Boot Actuator 'http.server.requests' metric MAX time

I have a Spring Boot application and I am using Spring Boot Actuator and Micrometer in order to track metrics about my application. I am specifically concerned about the 'http.server.requests' metric and the MAX statistic:
{
  "name": "http.server.requests",
  "measurements": [
    {
      "statistic": "COUNT",
      "value": 2
    },
    {
      "statistic": "TOTAL_TIME",
      "value": 0.079653001
    },
    {
      "statistic": "MAX",
      "value": 0.032696019
    }
  ],
  "availableTags": [
    {
      "tag": "exception",
      "values": [
        "None"
      ]
    },
    {
      "tag": "method",
      "values": [
        "GET"
      ]
    },
    {
      "tag": "status",
      "values": [
        "200",
        "400"
      ]
    }
  ]
}
I suppose the MAX statistic is the maximum execution time of a request (since I have made two requests, it's the time of the longer of the two).
Whenever I filter the metric by any tag, like localhost:9090/actuator/metrics/http.server.requests?tag=status:200, I get:
{
  "name": "http.server.requests",
  "measurements": [
    {
      "statistic": "COUNT",
      "value": 1
    },
    {
      "statistic": "TOTAL_TIME",
      "value": 0.029653001
    },
    {
      "statistic": "MAX",
      "value": 0.0
    }
  ],
  "availableTags": [
    {
      "tag": "exception",
      "values": [
        "None"
      ]
    },
    {
      "tag": "method",
      "values": [
        "GET"
      ]
    }
  ]
}
I always get 0.0 as the max time. What is the reason for this?
What does MAX represent?
MAX represents the maximum time taken to execute the endpoint.
Analysis for /user/asset/getAllAssets:
COUNT  TOTAL_TIME  MAX
5      115         17
6      122         17   (execution time = 122 - 115 = 7)
7      131         17   (execution time = 131 - 122 = 9)
8      187         56   (execution time = 187 - 131 = 56)
9      204         56   (execution time = 204 - 187 = 17; from now on MAX will be 56)
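The arithmetic above (TOTAL_TIME is cumulative, so each request's execution time is the difference between consecutive totals, and MAX tracks the largest one) can be checked with a few lines of illustrative Python:

```python
# Illustrative only: recompute per-request execution time and the running
# MAX from cumulative (COUNT, TOTAL_TIME) samples like those in the table.
samples = [(5, 115), (6, 122), (7, 131), (8, 187), (9, 204)]

running_max = 17  # MAX already observed at the first sample, per the table
results = []
for (_, prev_total), (_, total) in zip(samples, samples[1:]):
    execution_time = total - prev_total   # TOTAL_TIME is cumulative
    running_max = max(running_max, execution_time)
    results.append((execution_time, running_max))

print(results)  # [(7, 17), (9, 17), (56, 56), (17, 56)]
```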
Will MAX be 0 if we have a small number of requests (or only 1 request) to the particular endpoint?
No. The number of requests to a particular endpoint does not affect MAX (this can be seen in Spring Boot Admin).
When will MAX be 0?
There is a timer which sets the value to 0. When the endpoint has not been called or executed for some time, the timer sets MAX back to 0. Here the approximate timer value is 2 to 2.5 minutes (120 to 150 seconds).
DistributionStatisticConfig has .expiry(Duration.ofMinutes(2)), which resets the measurement to 0 if no request has been made in the last 2 minutes (120 seconds).
Methods such as public TimeWindowMax(Clock clock, ...) and private void rotate(), together with the Clock interface, implement this. You may see the implementation here.
How did I determine the timer value?
I took 6 samples (executed the same endpoint 6 times) and measured the difference between the time the endpoint was called and the time MAX was set back to zero.
The MAX property belongs to the enum Statistic, which is used by Measurement
(from a Measurement we get COUNT, TOTAL_TIME, and MAX):
public static final Statistic MAX
The maximum amount recorded. When this represents a time, it is
reported in the monitoring system's base unit of time.
Notes:
These cases are for the metric of a particular endpoint (here /actuator/metrics/http.server.requests?tag=uri:/user/asset/getAllAssets).
For the generalized metric /actuator/metrics/http.server.requests:
MAX for some endpoints will be set back to 0 by the timer. In my view, MAX for /http.server.requests behaves the same as for a particular endpoint.
UPDATE
The documentation has been updated for MAX:
NOTE: Max for basic DistributionSummary implementations such as
CumulativeDistributionSummary, StepDistributionSummary is a time
window max (TimeWindowMax). It means that its value is the maximum
value during a time window. If the time window ends, it'll be reset to
0 and a new time window starts again. Time window size will be the
step size of the meter registry unless expiry in
DistributionStatisticConfig is set to other value explicitly.
You can see the individual metrics by using ?tag=uri:{endpoint_tag}, as listed in the availableTags of the root /actuator/metrics/http.server.requests response. The details of the measurement values are:
COUNT: Rate per second for calls.
TOTAL_TIME: The sum of the times recorded. Reported in the monitoring system's base unit of time.
MAX: The maximum amount recorded. When this represents a time, it is reported in the monitoring system's base unit of time.
As given here and here.
The discrepancies you are seeing are due to the presence of a timer: after some time, the currently recorded MAX value for any tagged metric can be reset back to 0. Try adding some new calls to your endpoint and then immediately calling /actuator/metrics/http.server.requests; you should see a non-zero MAX value for the given tag.
The idea behind this is to get a MAX metric per smaller period: when you watch these metrics over time, you get an array of MAX values rather than a single value for one long period.
You can see this in action in the Micrometer source code. There is a rotate() method responsible for resetting the MAX value to create the behaviour described above. It is called on every poll() call, which is triggered periodically for metric gathering.
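The time-window behaviour described above can be sketched in a few lines. This is an illustrative toy model, not Micrometer's actual implementation (which keeps a ring buffer of windows in TimeWindowMax):

```python
import time

class TimeWindowMax:
    """Toy model of a time-window max: the recorded maximum is reset
    to 0 once the expiry window elapses, as described above."""
    def __init__(self, expiry_seconds, clock=time.monotonic):
        self.expiry = expiry_seconds
        self.clock = clock
        self.max_value = 0.0
        self.window_start = clock()

    def _rotate(self):
        # Start a fresh window (and forget the old max) once expired.
        if self.clock() - self.window_start >= self.expiry:
            self.max_value = 0.0
            self.window_start = self.clock()

    def record(self, value):
        self._rotate()
        self.max_value = max(self.max_value, value)

    def poll(self):
        self._rotate()
        return self.max_value

# Simulated clock so the example runs instantly.
now = [0.0]
twm = TimeWindowMax(expiry_seconds=120, clock=lambda: now[0])
twm.record(0.032)
print(twm.poll())   # 0.032 while the window is fresh
now[0] += 150       # no requests for 2.5 minutes
print(twm.poll())   # 0.0 after the window expired
```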

Is it possible to ask free form questions using google knowledge graph api?

Is it possible to ask a question like "how tall is the Eiffel Tower?" using google knowledge graph api? If not what is the correct api to use?
When I try this:
https://kgsearch.googleapis.com/v1/entities:search?query=how+tall+is+eiffel+tower&key=my_key&limit=1&indent=True
I get an empty result.
It's possible to ask, but half of the time it will "answer" with something different from what you were asking, and the rest of the time it will give you an empty result.
Even unambiguous searches usually return empty or unexpected results. For example, when I search for the current US president it returns a result about Barack Obama, and when I search for the US population it does not really say what it should (318.9 million (2014)):
=> #<HTTParty::Response:0x7ffc5857b938 parsed_response={"@context"=>{"@vocab"=>"http://schema.org/", "goog"=>"http://schema.googleapis.com/", "EntitySearchResult"=>"goog:EntitySearchResult", "detailedDescription"=>"goog:detailedDescription", "resultScore"=>"goog:resultScore", "kg"=>"http://g.co/kg"}, "@type"=>"ItemList", "itemListElement"=>[{"@type"=>"EntitySearchResult", "result"=>{"@id"=>"kg:/m/09c7w0", "name"=>"United States", "@type"=>["Country", "Thing", "Place", "AdministrativeArea"], "description"=>"Country", "image"=>{"contentUrl"=>"http://t1.gstatic.com/images?q=tbn:ANd9GcQKp8mjZhEK0hZroCA4srP9VA9eD8-0PcCsKSU4olhQlh6dMlxc", "url"=>"https://commons.wikimedia.org/wiki/File:USA_Flag_Map.svg", "license"=>"http://creativecommons.org/licenses/by-sa/2.5"}, "detailedDescription"=>{"articleBody"=>"The United States of America, commonly referred to as the United States or America, is a federal republic composed of 50 states, a federal district, five major self-governing territories, and various possessions. ", "url"=>"https://en.wikipedia.org/wiki/United_States", "license"=>"https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License"}, "url"=>"http://www.usa.gov/"}, "resultScore"=>246.96698}, {"@type"=>"EntitySearchResult", "result"=>{"@id"=>"kg:/g/1q5jrvck9", "name"=>"Population: Us", "@type"=>["Thing"], "description"=>"Song by Frank Portman"}, "resultScore"=>20.875225}]}, @response=#<Net::HTTPOK 200 OK readbody=true>, @headers={"content-type"=>["application/json; charset=UTF-8"], "vary"=>["Origin", "X-Origin", "Referer"], "date"=>["Fri, 03 Feb 2017 20:33:38 GMT"], "server"=>["ESF"], "cache-control"=>["private"], "x-xss-protection"=>["1; mode=block"], "x-frame-options"=>["SAMEORIGIN"], "x-content-type-options"=>["nosniff"], "alt-svc"=>["quic=\":443\"; ma=2592000; v=\"35,34\""], "connection"=>["close"], "transfer-encoding"=>["chunked"]}>
Regardless of how I phrase the query or what keywords I use, it is practically useless. I have also tried specifying the &types= of results I want. It rarely returns the expected results; for example, when I search for United States of America:
=> #<HTTParty::Response:0x7ffc58619f20 parsed_response={"@context"=>{"@vocab"=>"http://schema.org/", "goog"=>"http://schema.googleapis.com/", "EntitySearchResult"=>"goog:EntitySearchResult", "detailedDescription"=>"goog:detailedDescription", "resultScore"=>"goog:resultScore", "kg"=>"http://g.co/kg"}, "@type"=>"ItemList", "itemListElement"=>[{"@type"=>"EntitySearchResult", "result"=>{"@id"=>"kg:/m/09c7w0", "name"=>"United States", "@type"=>["Country", "Thing", "Place", "AdministrativeArea"], "description"=>"Country", "image"=>{"contentUrl"=>"http://t1.gstatic.com/images?q=tbn:ANd9GcQKp8mjZhEK0hZroCA4srP9VA9eD8-0PcCsKSU4olhQlh6dMlxc", "url"=>"https://commons.wikimedia.org/wiki/File:USA_Flag_Map.svg", "license"=>"http://creativecommons.org/licenses/by-sa/2.5"}, "detailedDescription"=>{"articleBody"=>"The United States of America, commonly referred to as the United States or America, is a federal republic composed of 50 states, a federal district, five major self-governing territories, and various possessions. ", "url"=>"https://en.wikipedia.org/wiki/United_States", "license"=>"https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License"}, "url"=>"http://www.usa.gov/"}, "resultScore"=>4238.782227}]}, @response=#<Net::HTTPOK 200 OK readbody=true>, @headers={"content-type"=>["application/json; charset=UTF-8"], "vary"=>["Origin", "X-Origin", "Referer"], "date"=>["Fri, 03 Feb 2017 20:29:07 GMT"], "server"=>["ESF"], "cache-control"=>["private"], "x-xss-protection"=>["1; mode=block"], "x-frame-options"=>["SAMEORIGIN"], "x-content-type-options"=>["nosniff"], "alt-svc"=>["quic=\":443\"; ma=2592000; v=\"35,34\""], "connection"=>["close"], "transfer-encoding"=>["chunked"]}>
I'd recommend not wasting your time with it, as I already did. Also note that the Custom Search API does not include results from the Knowledge Graph, and the "non-custom" Search API has long been deprecated.
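To make the distinction concrete: the Knowledge Graph Search API does entity lookup by keyword, not question answering, so at best you get an entity card back. A hedged Python sketch (the key is a placeholder; the sample response is trimmed to the JSON-LD shape the API documents, with illustrative values):

```python
import json
from urllib.parse import urlencode

# Build the request the API actually accepts: keywords, not questions.
params = {"query": "Eiffel Tower", "key": "MY_KEY", "limit": 1, "indent": "true"}
url = "https://kgsearch.googleapis.com/v1/entities:search?" + urlencode(params)

# A trimmed-down response of the documented JSON-LD shape:
sample = json.loads("""
{
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "EntitySearchResult",
      "result": {
        "@id": "kg:/m/02j81",
        "name": "Eiffel Tower",
        "description": "Tower in Paris, France"
      },
      "resultScore": 1234.5
    }
  ]
}
""")

top = sample["itemListElement"][0]["result"]
print(top["name"], "-", top["description"])  # an entity card, never "330 m"
```

Even in the best case the payload contains a name, type, and description, so there is no field that could carry the answer to "how tall is the Eiffel Tower?".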

Elasticsearch how to index an existing json file

I use a PUT command:
curl -XPUT "http://localhost:9200/music/lyrics/2" --data-binary @D:\simple\caseyjones.json
caseyjones.json:
{
  "artist": "Wallace Saunders",
  "year": 1909,
  "styles": ["traditional"],
  "album": "Unknown",
  "name": "Ballad of Casey Jones",
  "lyrics": "Come all you rounders if you want to hear
The story of a brave engineer
Casey Jones was the rounder's name....
Come all you rounders if you want to hear
The story of a brave engineer
Casey Jones was the rounder's name
On the six-eight wheeler, boys, he won his fame
The caller called Casey at half past four
He kissed his wife at the station door
He mounted to the cabin with the orders in his hand
And he took his farewell trip to that promis'd land
Chorus:
Casey Jones--mounted to his cabin
Casey Jones--with his orders in his hand
Casey Jones--mounted to his cabin
And he took his... land"
}
I get the warning "failed to parse, document is empty", yet the log shows the contents of the *.json file.
JSON does not allow literal line breaks inside string values. You need to replace the line breaks with \n and store the text as a single line.
Like:
{
  "artist": "Wallace Saunders",
  "year": 1909,
  "styles": ["traditional"],
  "album": "Unknown",
  "name": "Ballad of Casey Jones",
  "lyrics": "Come all you rounders if you want to hear\nThe story of a brave engineer\nCasey Jones was the rounder's name...."
}
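If the document is built programmatically, a JSON serializer does this escaping for you. A quick Python sketch (illustrative, not tied to any particular Elasticsearch client):

```python
import json

# A multi-line string with literal line breaks, as in the original file.
lyrics = """Come all you rounders if you want to hear
The story of a brave engineer
Casey Jones was the rounder's name...."""

doc = {
    "artist": "Wallace Saunders",
    "year": 1909,
    "name": "Ballad of Casey Jones",
    "lyrics": lyrics,
}

body = json.dumps(doc)  # newlines become the two-character escape \n
print("\\n" in body)    # True: the output is one valid JSON document
```

Writing `body` to a file and sending that with --data-binary avoids hand-escaping entirely.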

Set up all-day calendar events with Outlook 365 API on different time zones

I am trying to create one-day, all-day events with the Outlook 365 API. To do so, I specify the Start and End time in UTC format, and then indicate the StartTimeZone and EndTimeZone as described by the documentation:
{
  Start: '2015-07-14T23:00:00.000Z',
  End: '2015-07-15T23:00:00.000Z',
  StartTimeZone: 'W. Central Africa Standard Time',
  EndTimeZone: 'W. Central Africa Standard Time',
  ShowAs: 'Free',
  IsAllDay: true,
  Body: {
    ContentType: 'HTML',
    Content: 'To-do due date'
  },
  Subject: 'test 74'
}
Now, here are my problems:
Using a string to define the time zone is inconvenient and inconsistent. London uses GMT during winter but GMT+1 during summer, so during summer I must use 'W. Central Africa Standard Time' to have my request accepted by the API, which is confusing. Defining the time with only a format like 2015-07-14T00:00:00+/-XX:00 (midnight, the beginning of July 14th, in a zone at GMT+/-XX:00), with no mention of a named time zone, would be unequivocal and ideal to me, but this date format is rejected by the API when IsAllDay: true is set, stating that an all-day event should start and end at midnight (if no StartTimeZone and EndTimeZone are given).
Some time zones simply do not work with the GMT offset given in the documentation, even when the time and the zone correspond. These zones do not work with their documented GMT offset: Alaskan Standard Time, Pacific Standard Time, Mid-Atlantic Standard Time. Here is a query example that fails ('Mid-Atlantic Standard Time', GMT offset of -2):
{
  Start: '2015-07-15T02:00:00.000Z',
  End: '2015-07-16T02:00:00.000Z',
  StartTimeZone: 'Mid-Atlantic Standard Time',
  EndTimeZone: 'Mid-Atlantic Standard Time',
  ShowAs: 'Free',
  IsAllDay: true,
  Body: {
    ContentType: 'HTML',
    Content: 'To-do due date'
  },
  Subject: 'test 75'
}
It works with a GMT offset of -1 (Start: '2015-07-15T01:00:00.000'). What do I do for my users in GMT-2?
The same GMT offset can be described by several strings in the documentation. For instance, 'Mountain Standard Time' and 'US Mountain Standard Time' both describe GMT-07:00, yet only my queries with 'US Mountain Standard Time' work. Some offsets have up to 5 different strings (like GMT+01:00). Which one should I choose?
For now, I am choosing a time-zone string that works (if one exists!) based on the GMT offset. If I set my clock to London's current time, I will use 'W. Central Africa Standard Time' for StartTimeZone and EndTimeZone.
Is there a way NOT to use those strings? Or can someone explain how to choose them correctly? I am completely lost in date translation! :)
Getting all-day events right really does require knowing the user's time zone. Unfortunately, if you create the event in an arbitrary time zone and the user's mail client (Outlook, OWA, etc.) uses another, the event will show up spanning multiple days (since the start and end get shifted away from midnight).
So what you should really do here is set the start and end times to midnight in the user's time zone:
{
  "Start": "2015-07-17T00:00:00-04:00",
  "End": "2015-07-18T00:00:00-04:00",
  "StartTimeZone": "Eastern Standard Time",
  "EndTimeZone": "Eastern Standard Time",
  "IsAllDay": "true",
  "ShowAs": "Free",
  "Body": {
    "ContentType": "Text",
    "Content": "Test"
  },
  "Subject": "TZ AllDay Test"
}
Where would the event be used? Use that local time zone.
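A sketch of building such a payload at local midnight, in Python. The helper and its parameters are illustrative; in particular, mapping an IANA zone name to the Windows-style name the API expects is assumed to be handled elsewhere, and both names are simply passed in:

```python
from datetime import date, datetime, timedelta
from zoneinfo import ZoneInfo

def all_day_payload(day, iana_zone, windows_zone, subject):
    """Build Start/End at local midnight with the correct UTC offset.
    iana_zone (e.g. America/New_York) drives the offset; windows_zone
    (e.g. 'Eastern Standard Time') is what the API field expects."""
    start = datetime(day.year, day.month, day.day, tzinfo=ZoneInfo(iana_zone))
    end = start + timedelta(days=1)
    return {
        "Start": start.isoformat(),  # e.g. 2015-07-17T00:00:00-04:00
        "End": end.isoformat(),
        "StartTimeZone": windows_zone,
        "EndTimeZone": windows_zone,
        "IsAllDay": "true",
        "ShowAs": "Free",
        "Subject": subject,
    }

payload = all_day_payload(date(2015, 7, 17), "America/New_York",
                          "Eastern Standard Time", "TZ AllDay Test")
print(payload["Start"], payload["End"])
```

Note that zoneinfo resolves the July date to the daylight-saving offset (-04:00) automatically, which sidesteps the winter/summer confusion described in the question.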

Algorithm: Creating an outage table

I created a device which sends a point to my web server. My web server stores a Point instance with a created_at attribute reflecting the point's creation time. My device sends a request to my server at a consistent 180-second interval.
Now I want to see the periods of time my device has experienced outages in the last 7 days.
As an example, let's pretend it's August 3rd (08/03). I can query my Points table for points from the last 3 days, sorted by created_at:
Points = [ point(name=p1, created_at="08/01 00:00:00"),
           point(name=p2, created_at="08/01 00:03:00"),
           point(name=p3, created_at="08/01 00:06:00"),
           point(name=p4, created_at="08/01 00:20:00"),
           point(name=p5, created_at="08/03 00:01:00"),
           ... ]
I would like to write an algorithm that can list out the following outages:
outages = {
  "08/01": [ "00:06:00-00:20:00", "00:20:00-23:59:59" ],
  "08/02": [ "00:00:00-23:59:59" ],
  "08/03": [ "00:00:00-00:01:00" ],
}
Is there an elegant way to do this?
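One way to approach it: sort the points, call any gap between consecutive points longer than a threshold (here two missed 180 s beats, an assumed tolerance) an outage, and split each outage span across calendar-day boundaries. A Python sketch, with the query-window edges treated as virtual points so leading and trailing silence is reported too:

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = timedelta(seconds=180 * 2)  # allow one missed beat of slack

def find_outages(created_ats, window_start, window_end):
    """Report gaps longer than THRESHOLD between consecutive points,
    split per calendar day, as {'MM/DD': ['HH:MM:SS-HH:MM:SS', ...]}."""
    times = [window_start] + sorted(created_ats) + [window_end]
    outages = defaultdict(list)
    for prev, cur in zip(times, times[1:]):
        if cur - prev <= THRESHOLD:
            continue
        day = prev
        while day.date() <= cur.date():  # split the gap across days
            start = max(prev, datetime.combine(day.date(), datetime.min.time()))
            end = min(cur, datetime.combine(day.date(),
                                            datetime.max.time()).replace(microsecond=0))
            if start < end:
                outages[start.strftime("%m/%d")].append(
                    f"{start:%H:%M:%S}-{end:%H:%M:%S}")
            day += timedelta(days=1)
    return dict(outages)

# The example points from the question, given a concrete year.
points = [datetime(2022, 8, 1, 0, 0), datetime(2022, 8, 1, 0, 3),
          datetime(2022, 8, 1, 0, 6), datetime(2022, 8, 1, 0, 20),
          datetime(2022, 8, 3, 0, 1)]
result = find_outages(points, datetime(2022, 8, 1), datetime(2022, 8, 3, 0, 1))
print(result)
```

Run on the sample data, this reproduces the outage table in the question. The threshold is the main tuning knob: exactly 180 s would flag every slightly late heartbeat, so some slack is usually wanted.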
