Google Custom Search API: as of this morning (7/30/2020), a query with searchType=image returns without the items[] array. I verified my CSE settings, which include image search = enabled and search the whole web. Nothing has changed on my side, and it all worked for several years until this morning, so I'm trying to find out what has changed or happened.
The type of search I'm executing looks like this, for example:
"request": [
{
"title": "Google Custom Search - ocean",
"totalResults": "360000",
"searchTerms": "ocean",
"count": 10,
"startIndex": 1,
"inputEncoding": "utf8",
"outputEncoding": "utf8",
"safe": "off",
"cx": "XXX My Own Instance xxx",
"searchType": "image",
"imgSize": "xxlarge"
}
It returns all the metadata (number of results, search time, etc.) but no items, so my usual query, which worked up to now, can't use the fields=kind,items(title,link,snippet) parameter.
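For reference, a minimal Python sketch of the same request (the API key is a placeholder; the guard reflects the missing items[]):

import requests

# Google Custom Search JSON API endpoint (v1)
url = "https://www.googleapis.com/customsearch/v1"
params = {
    "key": "YOUR_API_KEY",  # placeholder
    "cx": "XXX My Own Instance xxx",
    "q": "ocean",
    "searchType": "image",
    "imgSize": "xxlarge",
    "fields": "kind,items(title,link,snippet)",
}
resp = requests.get(url, params=params).json()

# During the outage the metadata came back but items[] was absent,
# so guard against a missing key instead of assuming it is there.
for item in resp.get("items", []):
    print(item["title"], item["link"])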
Now, 8 hours later, I tried again to execute a query with searchType=image and it works, from my Angular application as well as a Python script and the REST testing tools.
I'm guessing Google had some downtime or a server malfunction and then fixed it.
I'd appreciate it if anyone can confirm the scenario I'm suspecting.
I am trying to implement full-text search for Neptune DB using Elasticsearch manually, but I'm getting this error:
{"requestId":"bcb16f6b-7e60-4e71-b0d8-a6a4a9b38b00","code":"MalformedQueryException","detailedMessage":"Failed to interpret Gremlin query: null"}
Here is my document:
{
  "entity_id": "f8b9726f-74f9-a0e0-5fbd-b609bbb14f89",
  "entity_type": [
    "suggestions"
  ],
  "document_type": "vertex",
  "predicates": {
    "title": {
      "value": "samsung mobile"
    }
  }
}
query:
g.withSideEffect('Neptune#fts.endpoint','elasticsearch cluster end point').withSideEffect('Neptune#fts.queryType', 'match').V().has('title','Neptune#fts samsung').local(values('title').fold()).limit(5).valueMap().toList()
It gives the error only if I put an existing word in the search, i.e. samsung; if I search for an unavailable word it works fine and doesn't throw any error.
Not sure what is wrong here, can anyone help me with this?
The local step you showed will, for each 'title' property found, create a list with that property in it. Without the local step, all values found would be wrapped into a single list if you just did values('title').fold().
Note, however, and this is probably why your query was failing, that you cannot add a valueMap step after that local step: you would be trying to apply valueMap not to vertices but to one or more lists of strings coming out of the local step.
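A sketch of two ways to rewrite the traversal so the steps compose (keeping your side effects; the endpoint is a placeholder):

g.withSideEffect('Neptune#fts.endpoint', 'elasticsearch cluster end point').
  withSideEffect('Neptune#fts.queryType', 'match').
  V().has('title', 'Neptune#fts samsung').
  limit(5).valueMap().toList()   // valueMap applied to vertices

g.withSideEffect('Neptune#fts.endpoint', 'elasticsearch cluster end point').
  withSideEffect('Neptune#fts.queryType', 'match').
  V().has('title', 'Neptune#fts samsung').
  local(values('title').fold()).limit(5).toList()   // just the folded title lists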
I am using the FB Graph API and not getting all the events. I know from searching this forum that only events 'hosted by' the group come through, but I am not even getting all the events 'hosted by' the group.
For example, I am trying to get the events from https://www.facebook.com/pg/goldenlionbristol/events/. The event on 1st Nov 2017, https://www.facebook.com/events/315066482344970/, is hosted by The Golden Lion but is not coming through. Neither is the one on 2nd November 2017, https://www.facebook.com/events/153007618626727/, which is also hosted by The Golden Lion.
The code (Ruby) I am using:
Koala.config.api_version = 'v2.10'
oauth = Koala::Facebook::OAuth.new app_id, app_secret
graph = Koala::Facebook::API.new oauth.get_app_access_token
fb_events = graph.get_object( fb_venue["url_listings"] )
For the example above, fb_venue["url_listings"] = 'goldenlionbristol/events'. This works fine for other groups (e.g. https://www.facebook.com/pg/MrWolfs/events/).
Events are not returned in order of which is closest to the current date (which a simple look at the result should have made obvious), and with the default limit of 25, the first event currently simply falls on the second page of results.
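A sketch of paging past that (raw Graph API via Python requests; the token is a placeholder, and your client library likely has paging helpers as well):

import requests

# Raise the page size and follow paging.next until exhausted.
# ACCESS_TOKEN is a placeholder for a valid app access token.
url = "https://graph.facebook.com/v2.10/goldenlionbristol/events"
params = {"access_token": "ACCESS_TOKEN", "limit": 100}

events = []
while url:
    payload = requests.get(url, params=params).json()
    events.extend(payload.get("data", []))
    url = payload.get("paging", {}).get("next")  # absolute URL; None when done
    params = None  # "next" already carries the query string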
The second event is actually a repeating event. Just look at the return for that one individually by id:
"start_time": "2017-10-05T21:00:00+0100",
"event_times": [
{
"id": "153007635293392",
"start_time": "2017-11-02T21:00:00+0000",
"end_time": "2017-11-02T23:00:00+0000"
},
{
"id": "153007631960059",
"start_time": "2017-10-05T21:00:00+0100",
"end_time": "2017-10-05T23:00:00+0100"
},
{
"id": "153007628626726",
"start_time": "2017-12-07T21:00:00+0000",
"end_time": "2017-12-07T23:00:00+0000"
}
],
So that has two upcoming "occurrences" on Nov 2nd and Dec 7th. Oh, and it already had one on Oct 5th ... and that one shows up on the second page of results as well.
You will have to get to the repeated occurrences via the "original" event, and go back through the events until you catch it. In the API reference for the event object there's no mention of repeating events, so it looks like it has not kept up with the evolution of UI features in that regard.
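A sketch of that second lookup (same placeholder token; event_times is the field shown above):

import requests

# Fetch the "original" event by id and expand its occurrences.
url = "https://graph.facebook.com/v2.10/153007618626727"
params = {"access_token": "ACCESS_TOKEN",
          "fields": "name,start_time,event_times"}
event = requests.get(url, params=params).json()

for occurrence in event.get("event_times", []):
    print(occurrence["id"], occurrence["start_time"])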
Edit: I found the answer for Logstash <= 2.0; see below.
Plugin created for Logstash 2.0
Whoever is interested in this with Logstash 2.0 or above: I created a plugin that makes this dead simple.
The GEM is here:
https://rubygems.org/gems/logstash-filter-dateparts
Here is the documentation and source code:
https://github.com/mikebski/logstash-datepart-plugin
I've got a bunch of data in Logstash with a @timestamp over a range of a couple of weeks. I have a duration field that is a number field, and I can do a date histogram. But I would like to do a histogram over hour of day rather than a linear histogram from date x to date y; that is, I would like the x axis to be 0 -> 23 instead of date x -> date y.
I think I can use the JSON Input advanced text input to add a field to the result set which is the hour of day of the @timestamp. The help text says:
"Any JSON formatted properties you add here will be merged with the elasticsearch aggregation definition for this section. For example shard_size on a terms aggregation."
This leads me to believe it can be done, but it does not give any examples.
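Presumably the shard_size case it mentions would just be this, merged into the terms aggregation (the 5000 is an arbitrary value):

{ "shard_size": 5000 }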
Edited to add:
I have tried setting up an entry in the scripted fields based on the link below, but it will not work like the examples on their blog with 4.1. The following script gives an error when trying to add a field with format number and name test_day_of_week:
Integer.parseInt("1234")
The problem looks like the scripting is not very robust. Oddly enough, I want to do exactly what they are doing in the examples (add fields for day of month, day of week, etc...). I can get the field to work if the script is doc['@timestamp'], but I cannot manipulate the timestamp.
The docs say Lucene expressions are allowed and show some trig and GCD examples for GIS type stuff, but nothing for date...
There is this update to the blog:
"UPDATE: As a security precaution, starting with version 4.0.0-RC1, Kibana scripted fields default to Lucene Expressions, not Groovy, as the scripting language. Since Lucene Expressions only support operations on numerical fields, the example below dealing with date math does not work in Kibana 4.0.0-RC1+ versions."
There is no suggestion for how to actually do this now. I guess I could go off and enable the Groovy plugin...
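For what it's worth, enabling that presumably means re-enabling dynamic Groovy scripting on each Elasticsearch node (an ES 1.x-era setting, off by default for security reasons):

# elasticsearch.yml -- assumption: ES 1.x-era setting; restart required
script.disable_dynamic: false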
Any ideas?
EDIT - THE SOLUTION:
I added a filter using Ruby to do this, and it was pretty simple:
Basically, in a Ruby script you can access event['field'] and you can create new ones. I use the Ruby time bits to create new fields based on the @timestamp for the event.
ruby {
code => "ts = event['#timestamp']; event['weekday'] = ts.wday; event['hour'] = ts.hour; event['minute'] = ts.min; event['second'] = ts.sec; event['mday'] = ts.day; event['yday'] = ts.yday; event['month'] = ts.month;"
}
This no longer appears to work in Logstash 1.5.4 - the Ruby date elements appear to be unavailable, and this then throws a "rubyexception" and does not add the fields to the logstash events.
I've spent some time searching for a way to recover the functionality we had in the Groovy scripted fields, which are no longer available for dynamic scripting, to provide me with fields such as "hourofday", "dayofweek", et cetera. What I've done is to add these as Groovy script files directly on the Elasticsearch nodes themselves, like so:
/etc/elasticsearch/scripts/
hourofday.groovy
dayofweek.groovy
weekofyear.groovy
... and so on.
Those script files contain a single line of Groovy, like so:
Integer.parseInt(new Date(doc["@timestamp"].value).format("d")) (dayofmonth)
Integer.parseInt(new Date(doc["@timestamp"].value).format("u")) (dayofweek)
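Presumably an hourofday.groovy for the histogram question would follow the same pattern with the "H" (hour of day, 0-23) format character:
Integer.parseInt(new Date(doc["@timestamp"].value).format("H")) (hourofday)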
To reference these in Kibana, first create a new search and save it, or choose one of your existing saved searches (take a copy of the existing JSON before you change it, just in case) in the "Settings -> Saved Objects -> Searches" page. Then modify the query to add "script_fields" in, so you get something like this:
{
  "query": {
    ...
  },
  "script_fields": {
    "minuteofhour": {
      "script_file": "minuteofhour"
    },
    "hourofday": {
      "script_file": "hourofday"
    },
    "dayofweek": {
      "script_file": "dayofweek"
    },
    "dayofmonth": {
      "script_file": "dayofmonth"
    },
    "dayofyear": {
      "script_file": "dayofyear"
    },
    "weekofmonth": {
      "script_file": "weekofmonth"
    },
    "weekofyear": {
      "script_file": "weekofyear"
    },
    "monthofyear": {
      "script_file": "monthofyear"
    }
  }
}
As shown, the "script_fields" line should fall outside the "query" itself, or you will get an error. Also ensure the script files are available to all your Elasticsearch nodes.
I'm experimenting with the language_id.txt dataset from the Google Prediction example. Right now I'm trying to update the model with the following method:
def update(label, data)
  input = @prediction.trainedmodels.update.request_schema.new
  input.label = label
  input.csv_instance = [data]
  result = @client.execute(
    :api_method => @prediction.trainedmodels.update,
    :parameters => {'id' => MODEL_ID},
    :headers => {'Content-Type' => 'application/json'},
    :body_object => input
  )
  assemble_json_body(result)
end
(This method is based on some Google sample code.)
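A minimal call, assuming MODEL_ID and the API clients are initialized as in that sample:
update('English', 'This is a test sentence.')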
My problem is that these updates have no effect. Here are the scores for "This is a test sentence." regardless of how many updates I run:
{
  "response": {
    "kind": "prediction#output",
    "id": "mymodel",
    "selfLink": "https://www.googleapis.com/prediction/v1.5/trainedmodels/mymodel/predict",
    "outputLabel": "English",
    "outputMulti": [
      {
        "label": "English",
        "score": 0.420937
      },
      {
        "label": "French",
        "score": 0.273789
      },
      {
        "label": "Spanish",
        "score": 0.305274
      }
    ]
  },
  "status": "success"
}
Per the disclaimer at the bottom of "Creating a Sentiment Analysis Model", I have made sure to update at least 100 times before expecting any changes. First, I tried using a single sentence and updating it 1000 times. Second, I tried using ~150 unique sentences drawn from Simple Wikipedia and updated once with each. Each update was "successful":
{"response":{"kind":"prediction#training","id":"mymodel","selfLink":"https://www.googleapis.com/prediction/v1.5/trainedmodels/mymodel"},"status":"success"}
but neither approach changed my results.
I've also tried using the APIs Explorer (Prediction, v1.5) and updating ~300 times that way. There's still no difference in my results. Those updates were also "successful".
200 OK
{
"kind": "prediction#training",
"id": "mymodel",
"selfLink": "https://www.googleapis.com/prediction/v1.5/trainedmodels/mymodel"
}
I am quite sure that the model is receiving these updates. get and analyze both show that the model has "numberInstances": "2024". Oddly, though, list shows that the model has "numberInstances": "406".
At this point, I don't know what could be causing this issue.
2019 Update
Based on a comment from Jochem Schulenklopper: the API was shut down in April 2018.
Developers who choose to move to the Google Cloud Machine Learning Engine will have to recreate their existing Prediction API models.
Machine Learning API examples:
https://github.com/GoogleCloudPlatform/cloudml-samples
I am using Codeigniter and Alex Bilbie's MongoDB library.
In the API I am developing, users can upload images and other users can comment on them.
I have chosen to include the comments as subdocuments of the images.
Each comment contains:
Fullname (of author)
Comment
Created_at
So, in other words, the user's full name is "hard coded" into each comment, so if they later decide to change their name, I have a problem.
I read that I can use atomic updates to update all occurrences of the name (like in comments), but how can I do this using Alex's library? Can I update all places where the name is wrong?
UPDATE
This is how the image document looks with the comments.
I think it is pretty strange that MongoDB encourages the use of subdocuments but then does not include a way to update multiple items in an array.
{
  "_id": ObjectId("4e9ead773dc793dc01020000"),
  "description": "An image",
  "category": "accident",
  "comments": [
    {
      "id": ObjectId("4e96bd063dc7937202000000"),
      "fullname": "James Bond",
      "comment": "This is a comment.",
      "created_at": "2011-10-19 13:02:40"
    }
  ],
  "created_at": "2011-10-19 12:59:03"
}
Thankful for all help!
I am not familiar with CodeIgniter, but maybe the MongoDB shell syntax will help you:
db.comments.update( {"Fullname":"Andrew Orsich"},
{ $set : { Fullname: "New name"} }, false, true )
The last true flag indicates that you want to update multiple documents, so it is possible to update all comments in one update operation.
BTW: denormalizing (not 'hard coding') data in MongoDB, and in NoSQL in general, is a usual operation. Also, operations that require updating a lot of documents usually run asynchronously. But it is up to you.
Update:
db.comments.update( {"comments.Fullname":"Andrew Orsich"},
    { $set : { "comments.$.Fullname": "New name" } }, false, true )
But the above query will only update the full name in the first matching comment in the nested array (the positional $ operator matches one element). If you need to change more than one array element, you will need to use multiple update statements.
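A sketch of that loop in the shell, assuming the images live in a collection called images and the field is spelled fullname as in the document shown:

db.images.find({"comments.fullname": "James Bond"}).forEach(function(doc) {
    doc.comments.forEach(function(comment) {
        if (comment.fullname === "James Bond") {
            comment.fullname = "New name";
        }
    });
    db.images.save(doc);  // write the modified comments array back
});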