Update frequency of reCAPTCHA analytics

We recently got reCAPTCHA on one of our sites.
I'm being asked to pull out the data, which is simple enough and comes out fine in CSV format.
My question is: when does the data update? It seems to be every 24 hours, but I'm not really sure.
Also, is there a way to get more data than "Date,no CAPTCHAs,Passed CAPTCHAs,Failed CAPTCHAs,Total Sessions,Failed Sessions,Average Response Time (seconds),Average score"?
Thanks in advance
Reading the analytics FAQ

If you are using reCAPTCHA Enterprise, you can use the GetMetrics gRPC endpoint or REST endpoint. This returns metrics about both Scores and Challenges (CAPTCHAs):
{
  "name": string,
  "startTime": string,
  "scoreMetrics": [
    {
      object (ScoreMetrics)
    }
  ],
  "challengeMetrics": [
    {
      object (ChallengeMetrics)
    }
  ]
}
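As a sketch, assuming reCAPTCHA Enterprise, the REST flavor of GetMetrics can be called like this with Python (the project ID, key ID, and token handling are placeholders, not values from the question):

import requests

# Hypothetical project and key IDs; substitute your own.
PROJECT_ID = "my-project"
KEY_ID = "my-recaptcha-key-id"

# GetMetrics REST endpoint (reCAPTCHA Enterprise v1).
url = (f"https://recaptchaenterprise.googleapis.com/v1/"
       f"projects/{PROJECT_ID}/keys/{KEY_ID}/metrics")

# Assumes you already have an OAuth 2.0 access token, e.g. from
# `gcloud auth print-access-token` or a service account.
headers = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}

resp = requests.get(url, headers=headers)
resp.raise_for_status()
metrics = resp.json()

# startTime plus the scoreMetrics/challengeMetrics buckets cover the
# same fields as the CSV export, and more.
print(metrics["startTime"])
print(metrics.get("challengeMetrics", []))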
These metrics should be approximately real-time; that is, values should change within minutes of events occurring.

Related

Display the date of the last value for a custom metric in Datadog

I'm trying to display, in a Datadog dashboard, the date of the last value of a metric. I can use a simple QueryValue widget for that, but I don't know how to get the date of this metric instead of its value.
Example:
mymetric = [21,5,42]
let's say that the relevant dates for those metrics are
['1/1/2022', '1/2/2022', '27/2/2022']
I'd like to display 27/2/2022.
For now, I'm displaying only the value with this query
"queries": [
{
"query": "max:mycounter{$env}",
"data_source": "metrics",
"name": "query1",
"aggregator": "last"
}
]
Is this possible with Datadog?
In fact I've found a workaround, but it seems a bit suboptimal: from my code I also push mycounter.date_day, mycounter.date_month, and mycounter.date_year, and then I display the day, month, and year in three different QueryValue widgets (sketched below).
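A minimal sketch of that workaround with the Python datadog (DogStatsD) client, assuming a local agent; the metric names match the ones above, and the date values are the ones from my example:

from datadog import initialize, statsd

initialize(statsd_host="localhost", statsd_port=8125)

# Push the value plus its date components as separate gauges, so each
# can be shown in its own QueryValue widget.
statsd.gauge("mycounter", 42)
statsd.gauge("mycounter.date_day", 27)
statsd.gauge("mycounter.date_month", 2)
statsd.gauge("mycounter.date_year", 2022)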
Regards,
Blured.
This is not a use case Datadog metrics are designed for. Any solution you find is going to be pretty hacky.
It seems like instead of a custom metric, you'd want to send a log or event with date/time attributes that you can display. Even that doesn't really sound close to what you're looking for, though.
If you really wanted to get this working in a clean way, I suppose you could try developing your own custom Datadog widget as an app: https://www.datadoghq.com/blog/datadog-apps/ (reference docs)
But whatever your goal is might just have to be redesigned to make more sense for what Datadog is built for.

Elasticsearch - List all sources sending messages to ES

I am trying to get a list that shows me all the sources Elasticsearch is receiving messages from. I am pretty new to this topic and trying to get deeper into it. Basically, I am looking for a way to see the total number of sources sending logs to my central logging solution, and ideally also get a list of the source names.
Does anyone have an idea how to get such information querying Elasticsearch?
Yes, this is possible, though the solution depends on how your data looks.
Users typically index data in Elasticsearch so that it contains more than just the raw log lines. This is done automatically if you're using Filebeat. Otherwise, you'd do something (add a field using Logstash, rely on a host field in syslog, etc) to ensure you have a field that contains your "source" identifier:
{
  "message": "my super valuable logline",
  "source": "my_kinda_awesome_app"
}
Given the above, you can identify all sources (and record counts!) with a terms aggregation like:
{
  "aggs": {
    "my_sources": {
      "terms": { "field": "source" }
    }
  }
}
Kibana makes this all easier since you don't need to know/write ES queries and can do stuff visually.
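As a sketch with the official Python client (the index name here is hypothetical, and if source was dynamically mapped as a text field you would aggregate on source.keyword instead):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# size=0 skips the hits themselves; only the aggregation is returned.
resp = es.search(
    index="my-logs",
    size=0,
    aggs={"my_sources": {"terms": {"field": "source"}}},
)

# Each bucket is one distinct source, with its document count.
for bucket in resp["aggregations"]["my_sources"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])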

How to structure Shopify data into a Firestore collection that can be queried efficiently

The Background
In an attempt to build some back-end services for my e-commerce (Shopify-based) site, I have set up a cloud function that writes order details for every new order; it is invoked by the webhook POST that Shopify provides (the orders/create webhook).
My current cloud function:
exports.saveOrderDetails = functions.https.onRequest((req, res) => {
  // Use the order's "name" (e.g. #9999) as the document ID.
  const docRef = db.collection('orders').doc(req.body.name);
  // Store the whole webhook payload, then acknowledge the webhook.
  docRef.set(req.body)
    .then(() => res.status(200).send())
    .catch((err) => {
      console.error(err);
      res.status(500).send();
    });
});
This captures the data from the webhook and stores it under a document named after the order number ("name", shown as #9999 in my screenshot) within my "orders" collection.
My question is: with the help of body-parser (which already parses out "name" to set my document ID), how could I improve my cloud function to store this webhook POST in a data structure that Firestore can query efficiently later?
After reviewing the comments on this question, I moved it over to Firebase-Talk. It appears the feature I am attempting here is close to what is known as "collection group queries", and I was informed I should adjust my data model approach since this feature is currently still on the road map - and perhaps look into the Firestore REST API, as suggested by @jason-berryman.
Besides the REST API, @frank-van-puffelen made a great suggestion to look into working with arrays, lists, and sets for Firebase/Firestore.
Another approach that could mitigate this in my scenario is to have my HTTP cloud function take multiple parsing arguments that create more top-level documents - however, this could create a point of scaling failure or an increase in cost, due to putting more parsing logic in my cloud function and adding latency...
I will mark my question as answered for the time being, to hopefully help others understand how to work with documents in a single collection in Firestore, and not attempt to query groups of collections before they get too far into modelling and need to restructure their app.
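For anyone landing here, a sketch of the kind of single-collection query this model supports, using the google-cloud-firestore Python client (the created_at field is assumed from Shopify's order payload, not confirmed from my screenshot):

from google.cloud import firestore

db = firestore.Client()

# Query documents in the single "orders" collection; no collection
# group query is needed because every order is a sibling document.
recent = (
    db.collection("orders")
    .order_by("created_at", direction=firestore.Query.DESCENDING)
    .limit(10)
    .stream()
)

for order in recent:
    print(order.id, order.to_dict().get("created_at"))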

Updating a Lotus Notes rich text field via AJAX/Domino Data Service API

I've been tasked with updating a legacy Notes application - I'd rather do basically anything else, but such is life. As per the API, I should be able to update a rich text field if the data is in the format
FieldName: { contentType: 'text/html', data: newData, type: 'richtext' }
(serialized to JSON, of course)
But what happens is that the original RT field gets replaced by three MIME Part fields (with the same name, containing what you'd expect: "Content-Type: multipart/mixed", "boundary" and so on), and "newData" gets stored in a $FILE attachment. A few MIME-specific fields also get added to the document ($MIMETrack, $NoteHasNativeMIME, MIME_Version).
Now this certainly wouldn't be the first time that Notes documentation doesn't match actual functionality, but I was wondering if anyone has been successfully able to do this? Alternatively, any other way to update an RT field via AJAX (preferably with HTTP PATCH)?
EDIT: upon further inspection, this seems to be a configuration issue. I tried doing a GET on a document with an RT field (containing the text "testing rt field", submitted via a regular web form). According to the API, the expected result would be
"FieldName": {
"contentType":"text/html",
"data":"testing rt field",
"type":"richtext"
}
but instead what is returned is
"FieldName": {
"type":"multipart",
"content": [
{
"contentType":"multipart\/alternative; Boundary=\"0__=4DBB0A82DFA47A268f9e8a93df938690918c4DBB0A82DFA47A26\"",
"contentDisposition":"inline"
},
{
"contentType":"text\/plain; charset=US-ASCII",
"data":"testing rt field",
"boundary":"--0__=4DBB0A82DFA47A268f9e8a93df938690918c4DBB0A82DFA47A26"
},
{
"contentType":"text\/html; charset=US-ASCII",
"contentDisposition":"inline",
"data":"<html><body><font size=\"2\" face=\"sans-serif\">testing rt field<\/font><\/body><\/html>",
"boundary":"--0__=4DBB0A82DFA47A268f9e8a93df938690918c4DBB0A82DFA47A26"
}
]
}
So I'm guessing there is a problem with our Domino configuration somewhere. Where, I have no idea; any tips would be greatly appreciated.
In Domino 8.5.3 UP1 the data service represented rich text fields as one HTML part ("type": "richtext" as described in the 8.5.3 doc). This had some severe limitations. For example, you couldn't create a rich text field with embedded images and attachments.
Since Domino 9.0 the data service represents rich text fields as multiple parts ("type": "multipart" as described in the 9.0.1 doc). However, you can still PUT and POST the old rich text format, and you can GET the old format by specifying multipart=false in the URL. In other words, the original poster seems to be using Domino 9.0 and it is working as expected.
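A sketch of both directions with Python requests (server, database path, UNID, and credentials are all hypothetical):

import requests

# Hypothetical Domino Data Service document URL.
url = ("https://domino.example.com/legacy.nsf"
       "/api/data/documents/unid/0123456789ABCDEF0123456789ABCDEF")
auth = ("username", "password")

# GET with multipart=false returns the pre-9.0 single-part
# "richtext" representation instead of the multipart one.
doc = requests.get(url, params={"multipart": "false"}, auth=auth).json()
print(doc["FieldName"])

# PUT still accepts the old rich text format on write.
payload = {
    "FieldName": {
        "contentType": "text/html",
        "data": "<html><body>updated rt field</body></html>",
        "type": "richtext",
    }
}
requests.put(url, json=payload, auth=auth)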

Carrot2+ElasticSearch Basic Flow of Information

I am using Carrot2 and Elasticsearch. I had an Elasticsearch server running with a lot of data when I installed the Carrot2 plugin.
Wanted to get answers to a few basic questions:
Will clustering work only on newly indexed documents or even old documents?
How can I specify which fields to look at for clustering?
The curl command is working and giving some results. How can I translate the curl command, which takes JSON as input, into a call to a REST API URL of the form localhost:9200/article-index/article/_search_with_clusters?.....
Appreciate any help.
Yes, if you want to use the plugin straight off the ES installation, you need to make REST calls of your own. I believe you are using Python. Take a look at requests. It is a delightful REST tool for Python.
To make POST requests you can do the following:
import requests
import json

url = 'http://localhost:9200/article-index/article/_search_with_clusters'
payload = {'some': 'data'}
r = requests.post(url, data=json.dumps(payload))
print(r.text)
Find more information in the requests documentation.
Will clustering work only on newly indexed documents or even old documents?
It will work even on old documents.
How can I specify which fields to look at for clustering?
Here's an example using the Shakespeare dataset. The query is: which of Shakespeare's plays are about war?
$ curl -XPOST http://localhost:9200/shakespeare/_search_with_clusters?pretty -d '
{
  "search_request": {
    "query": {"match" : { "_all": "war" }},
    "size": 100
  },
  "max_hits": 0,
  "query_hint": "war",
  "field_mapping": {
    "title": ["_source.play_name"],
    "content": ["_source.text_entry"]
  },
  "algorithm": "lingo"
}'
Running this, you'll get back plays like Richard, Henry... The title is what Carrot2 uses to develop the cluster names, and the text entry is what it uses to build the clusters.
The curl command is working and giving some results. How can I translate the curl command, which takes JSON as input, into a call to a REST API URL of the form localhost:9200/article-index/article/_search_with_clusters?.....
Typically, you'd use the Elasticsearch client libraries for your language of choice, though plain HTTP also works (sketched below).
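For instance, the Shakespeare query above maps onto the requests approach from the earlier snippet like this (same host, index, and payload as the curl example):

import requests

url = "http://localhost:9200/shakespeare/_search_with_clusters"
payload = {
    "search_request": {
        "query": {"match": {"_all": "war"}},
        "size": 100,
    },
    "max_hits": 0,
    "query_hint": "war",
    "field_mapping": {
        "title": ["_source.play_name"],
        "content": ["_source.text_entry"],
    },
    "algorithm": "lingo",
}

# json= sets the Content-Type header and serializes the payload.
r = requests.post(url, json=payload, params={"pretty": "true"})
print(r.text)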
