I am using the FB Graph API and not getting all the events. I know from searching this forum that only events 'hosted by' the group come through, but I am not even getting all the events 'hosted by' the group.
For example, I am trying to get the events from https://www.facebook.com/pg/goldenlionbristol/events/. The event on 1st Nov 2017, https://www.facebook.com/events/315066482344970/, is hosted by The Golden Lion but is not coming through. Neither is the one on 2nd November 2017, https://www.facebook.com/events/153007618626727/, which is also hosted by The Golden Lion.
The code (Ruby) I am using:
Koala.config.api_version = 'v2.10'
oauth = Koala::Facebook::OAuth.new app_id, app_secret
graph = Koala::Facebook::API.new oauth.get_app_access_token
fb_events = graph.get_object( fb_venue["url_listings"] )
For the example above, fb_venue["url_listings"] = 'goldenlionbristol/events'. This is working fine for other groups (e.g. https://www.facebook.com/pg/MrWolfs/events/).
Events are not returned in order of how close they are to the current date (a quick look at the result makes that obvious), and with the default limit of 25, the first event you mention currently lands on the second page of results.
The second event is actually a repeating event. Just look at what is returned for that one individually, by its id:
"start_time": "2017-10-05T21:00:00+0100",
"event_times": [
{
"id": "153007635293392",
"start_time": "2017-11-02T21:00:00+0000",
"end_time": "2017-11-02T23:00:00+0000"
},
{
"id": "153007631960059",
"start_time": "2017-10-05T21:00:00+0100",
"end_time": "2017-10-05T23:00:00+0100"
},
{
"id": "153007628626726",
"start_time": "2017-12-07T21:00:00+0000",
"end_time": "2017-12-07T23:00:00+0000"
}
],
So that has two upcoming "occurrences" on Nov 2nd and Dec 7th. Oh, and it already had one on Oct 5th ... and that one shows up on the second page of results as well.
You will have to get to the repeated occurrences via the "original" event, and page back through past events until you reach it. In the API reference for the event object there is no mention of repeating events, so it looks like the documentation has not kept up with the evolution of the UI features in that regard.
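As a rough, untested sketch with Koala (reusing the graph object from your code; the field and parameter names come from the Graph API event object), you could page through everything and keep only the upcoming occurrences:

require 'time'

# Ask for bigger pages and for event_times so repeating events expose their occurrences.
events = graph.get_connections('goldenlionbristol', 'events',
                               fields: 'name,start_time,event_times', limit: 100)

upcoming = []
while events
  events.each do |event|
    # A repeating event lists its occurrences in event_times; a one-off only has its own start_time.
    occurrences = event['event_times'] || [event]
    upcoming += occurrences.select { |o| Time.parse(o['start_time']) >= Time.now }
  end
  events = events.next_page # nil once the last page has been consumed
end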
Related
Google Custom Search API: as of this morning (7/30/2020), a custom search with searchType=image returns without the items[] array. I verified my CSE settings, which include image search = enabled and search the whole web. Nothing changed on my side and it all worked for several years until this morning, so I'm trying to find out what has changed/happened.
The type of search I'm executing looks like this, for example:
"request": [
{
"title": "Google Custom Search - ocean",
"totalResults": "360000",
"searchTerms": "ocean",
"count": 10,
"startIndex": 1,
"inputEncoding": "utf8",
"outputEncoding": "utf8",
"safe": "off",
"cx": "XXX My Own Instance xxx",
"searchType": "image",
"imgSize": "xxlarge"
}
It returns all the metadata, with number of results and search time etc., but no items, so my usual query that worked up to now can't use the fields=kind,items(title,link,snippet) parameter.
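(For illustration, the kind of request I'm executing is essentially the following, with my key and engine id replaced by placeholders:)

GET https://www.googleapis.com/customsearch/v1?key=<API_KEY>&cx=<CSE_ID>&q=ocean&searchType=image&imgSize=xxlarge&fields=kind,items(title,link,snippet)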
Now, 8 hours later, I tried to execute a query with searchType=image again and it works, from my Angular application as well as my Python script and the REST testing tools.
I'm guessing Google had some downtime or server malfunction and then they fixed it.
I'd appreciate it if anyone can confirm the scenario I'm suspecting.
Imagine I have the following document:
{
  "name": "Foo",
  "age": 0
}
We receive events that trigger updates to these fields:
Event 1
{
  "service_timestamp": "2019-09-15T09:00:01",
  "updated_name": "Bar"
}
Event 2
{
  "service_timestamp": "2019-09-15T09:00:02",
  "updated_name": "Foo"
}
Event 2 was published by our service one second later than Event 1, so we would expect our document to first update the "name" property to "Bar", then back to "Foo". However, imagine that for whatever reason these events arrive out of order (Event 2 THEN Event 1). The final state of the document would then be "Bar", which is not the desired behavior.
We need to guarantee that we update our document in the order of the "service_timestamp" field on the event.
One solution we came up with is to have an additional last_updated_time property on each field, like so:
{
  "name": {
    "value": "Foo",
    "last_updated_time": "1970-01-01T00:00:00"
  },
  "age": {
    "value": 0,
    "last_updated_time": "1970-01-01T00:00:00"
  }
}
We would then only update the property if the service_timestamp of the event occurs after the last_updated_time of the property in the document:
{
  "script": {
    "source": "if (ctx._source.name.last_updated_time.compareTo(params.event.service_timestamp) < 0) { ctx._source.name.value = params.event.updated_name; ctx._source.name.last_updated_time = params.event.service_timestamp; }",
    "params": {
      "event": {
        "service_timestamp": "2019-09-15T09:00:01",
        "updated_name": "Bar"
      }
    }
  }
}
While this would work, it seems costly to perform a read, then a write on each update. Are there any other ways to guarantee events update in the correct order?
Edit 1: Some other things to consider
We cannot assume out-of-order events will occur in a small time window. Imagine the following: we attempt to update a customer's name, but this update fails, so we store the update event in some dead letter queue with the intention of refiring it later. We fix the bug that caused the update to fail, and refire all events in the dead letter queue. If no updates occurred that update the name field during the time we were fixing this bug, then the event in the dead letter queue should successfully update the property. However, if some events did update the name, the event in the dead letter queue should not update the property.
Everything Mousa said is correct wrt "Internal" versioning, which is where you let Elasticsearch handle incrementing the version.
However, Elasticsearch also supports "external" versioning, where you can provide a version with each update that gets checked against the current doc's version. I believe this would solve your case of events indexing to ES "out of order", and would prevent those issues across any timeframe of events (whether 1 second or 1 week apart, as in your dead letter queue example).
To do this, you'd track the version of documents in your primary datastore (Elasticsearch should never be a primary datastore!), and attach it to indexing requests.
First you'd create your doc with any version number you want, let's start with 1:
POST localhost:9200/my-index/my-type/<doc id>?version=1&version_type=external -d
{
  "name": "Foo",
  "age": 0
}
Then the updates would also get assigned versions from your service and/or primary datastore:
Event 1
POST localhost:9200/my-index/my-type/<doc id>?version=2&version_type=external -d
{
  "service_timestamp": "2019-09-15T09:00:01",
  "updated_name": "Bar"
}
Event 2
POST localhost:9200/my-index/my-type/<doc id>?version=3&version_type=external -d
{
  "service_timestamp": "2019-09-15T09:00:02",
  "updated_name": "Foo"
}
This ensures that even if the updates are applied out of order, the most recent one wins. If Event 1 is applied after Event 2, you'd get a 409 error code representing a VersionConflictEngineException, and most importantly Event 1 would NOT override Event 2.
Instead of incrementing a version int by 1 each time, you could choose to convert your timestamps to epoch millis and provide that as the version - similar to your idea of creating a last_updated_property field, but taking advantage of Elasticsearch's built in versioning. That way, the most recently timestamped update will always "win" and be applied last.
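For example, sticking with the same placeholder index/type/id as above and assuming the service_timestamp is UTC, Event 2's 2019-09-15T09:00:02 converts to 1568538002000 epoch millis, which you can pass straight through as the external version:

POST localhost:9200/my-index/my-type/<doc id>?version=1568538002000&version_type=external -d
{
  "service_timestamp": "2019-09-15T09:00:02",
  "updated_name": "Foo"
}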
I highly recommend you read this short blog post on Elasticsearch versioning - it goes into way more detail than I did here: https://www.elastic.co/blog/elasticsearch-versioning-support.
Happy searching!
I have tried to find anything about my problem online, but unfortunately I had no luck finding anything that would help me.
I have a custom JavaScript map built with mapbox-gl-js.
The map shows real estate objects which can be filtered by country, city, etc. using a custom-built filter bar. On this filter bar there are dropdown fields and two input fields. The filters I built for the dropdown fields work just fine; in the code I use equal-to comparisons for this task.
The two input fields are there to filter by maximum price and minimum square meters; here I use the less-than-or-equal and greater-than-or-equal comparison filters.
For some reason these two filters won't work as desired. For example, if I filter for objects with a max price of 1000000, objects with a greater price still show.
This is how my filter looks in JSON:
["all",["<=","preis",1000000]]
This is how the features look:
feature = {
  "type": "Feature",
  "geometry": {
    "type": "Point",
    "coordinates": [...]
  },
  "properties": {
    [...],
    "preis": 20000000,
    [...]
  }
}
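For context, this is roughly how I apply the filter (a minimal sketch; the layer id 'objects' is simplified here):

// Apply the combined filter to the (simplified) layer that renders the real estate objects.
// The value compared against is a plain number, just like "preis" in the feature properties.
map.setFilter('objects', ['all', ['<=', 'preis', 1000000]]);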
I have also tried to reproduce this in a simple test map with simple objects; the problem exists there as well.
Does anyone have a clue why this is acting up on me, or has anyone had a similar issue?
Thanks and best regards, John
I'm experimenting with the language_id.txt dataset from the Google Prediction example. Right now I'm trying to update the model with the following method:
def update(label, data)
  input = @prediction.trainedmodels.update.request_schema.new
  input.label = label
  input.csv_instance = [data]
  result = @client.execute(
    :api_method => @prediction.trainedmodels.update,
    :parameters => {'id' => MODEL_ID},
    :headers => {'Content-Type' => 'application/json'},
    :body_object => input
  )
  assemble_json_body(result)
end
(This method is based on some Google sample code.)
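(For reference, a single update call looks like this, with a made-up training sentence:)

update('English', 'The weather is nice today.')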
My problem is that these updates have no effect. Here are the scores for "This is a test sentence." regardless of how many updates I run:
{
  "response": {
    "kind": "prediction#output",
    "id": "mymodel",
    "selfLink": "https://www.googleapis.com/prediction/v1.5/trainedmodels/mymodel/predict",
    "outputLabel": "English",
    "outputMulti": [
      {
        "label": "English",
        "score": 0.420937
      },
      {
        "label": "French",
        "score": 0.273789
      },
      {
        "label": "Spanish",
        "score": 0.305274
      }
    ]
  },
  "status": "success"
}
Per the disclaimer at the bottom of "Creating a Sentiment Analysis Model", I have made sure to update at least 100 times before expecting any changes. First, I tried using a single sentence and updating with it 1000 times. Second, I tried using ~150 unique sentences drawn from Simple Wikipedia and updating once with each. Every update was "successful":
{"response":{"kind":"prediction#training","id":"mymodel","selfLink":"https://www.googleapis.com/prediction/v1.5/trainedmodels/mymodel"},"status":"success"}
but neither approach changed my results.
I've also tried using the APIs Explorer (Prediction, v1.5) and updating ~300 times that way. There's still no difference in my results. Those updates were also "successful".
200 OK
{
  "kind": "prediction#training",
  "id": "mymodel",
  "selfLink": "https://www.googleapis.com/prediction/v1.5/trainedmodels/mymodel"
}
I am quite sure that the model is receiving these updates. get and analyze both show that the model has "numberInstances": "2024". Oddly, though, list shows that the model has "numberInstances": "406".
At this point, I don't know what could be causing this issue.
2019 Update
As Jochem Schulenklopper noted in a comment, the API was shut down in April 2018.
Developers who choose to move to the Google Cloud Machine Learning Engine will have to recreate their existing Prediction API models.
Machine Learning API examples:
https://github.com/GoogleCloudPlatform/cloudml-samples
This is on a recent version of couchbase server.
The end goal is for the reduce/group query to aggregate the values of duplicate keys into a single row with an array value.
View result with no reduce/grouping (in reality there are maybe 50 rows like this emitted):
{
  "total_rows": 3,
  "offset": 0,
  "rows": [
    {
      "id": "1806a62a75b82aa6071a8a7a95d1741d",
      "key": "064b6b4b-8e08-4806-b095-9e59495ac050",
      "value": "1806a62a75b82aa6071a8a7a95d1741d"
    },
    {
      "id": "47abb54bf31d39946117f6bfd1b088af",
      "key": "064b6b4b-8e08-4806-b095-9e59495ac050",
      "value": "47abb54bf31d39946117f6bfd1b088af"
    },
    {
      "id": "ed6a3dd3-27f9-4845-ac21-f8a5767ae90f",
      "key": "064b6b4b-8e08-4806-b095-9e59495ac050",
      "value": "ed6a3dd3-27f9-4845-ac21-f8a5767ae90f"
    }
  ]
}
with reduce + group_level=1:
function(keys, values, re) {
  return values;
}
This yields an error from Couch with the actual 50 or so rows from the real view (it even fails with fewer view rows); Couch says something about the reduce output not shrinking rapidly enough. However, this same type of thing works JUST FINE when the view keys are integers and there is a small amount of data.
Can someone please explain the difference to me?
Reduce values need to remain as small as possible, due to the nature of how they are stored in the internal b-tree data format. There's a little bit of information in the wiki about why this is.
If you want to identify unique values, this needs to be done in your map function. This section on the same wiki page shows you one method you can use to do so. (I'm sure there are others)
I am almost always going to be querying this view with a "key" parameter, so there really is no need to aggregate values via Couch; it can be easily and efficiently done in the app.
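For example, a minimal sketch of that app-side approach (the field name group_key and the viewResult variable are made up): the map function emits one row per document, keyed by the field being grouped on, and the app collapses the rows for a given key into an array.

// Map function: one row per document, keyed by the field we group on.
function (doc, meta) {
  if (doc.group_key) {
    emit(doc.group_key, meta.id);
  }
}

// Application side, after querying the view with ?key="064b6b4b-8e08-4806-b095-9e59495ac050":
var ids = viewResult.rows.map(function (row) { return row.value; });
// ids now holds the three document ids shown in the unreduced output above.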