My code is displayed below.
client = Elasticsearch(url, http_auth = (username, password), verify_certs = False, read_timeout=50, terminate_after=25000)
examplename = 'GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4'
s = Search(using = client, index = [set_index]).source(['metadata.Filename'])\
.query('match', Filename={examplename})
total = s.count()
The error message is:
elasticsearch.exceptions.SerializationError: ({'query': {'match': {'Filename': set(['GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4'])}}}, TypeError("Unable to serialize set(['GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4']) (type: <type 'set'>)",))
In general, I don't need my search term to match the entire filename. So, for example, if the document is named GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4, I want that document to be returned if I query for 20180508.
In .query('match', Filename={examplename}) you are passing a set, which is not JSON serializable ({examplename} is a Python set literal, not string interpolation). It should just be .query('match', Filename=examplename). Hope this helps!
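For reference, a minimal sketch of the corrected call using elasticsearch-dsl; url, username, password, and set_index are placeholders carried over from the question:

from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search

client = Elasticsearch(url, http_auth=(username, password), verify_certs=False)
examplename = 'GEOS.fp.asm.inst1_2d_smp_Nx.20180508_1700.V01.nc4'

# Pass the filename as a plain string, not a set.
s = Search(using=client, index=[set_index]) \
    .source(['metadata.Filename']) \
    .query('match', Filename=examplename)
total = s.count()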
After running my code to retrieve a report with the Google Analytics Data gem, the response does not return all the rows I expect: it leaves out the empty rows and only returns the ones that have data. This is the code I am running; property and analytics_account are models that store the account details passed to the API.
property = my_analytics_property
analytics_account = property.google_analytics_account
analytics_data_service = Google::Analytics::Data.analytics_data do |config|
config.credentials = analytics_account.google_account_oauth.credentials
end
date_dimension = ::Google::Analytics::Data::V1beta::Dimension.new(name: "date")
bounce_rate = ::Google::Analytics::Data::V1beta::Metric.new(name: "bounceRate")
page_views = ::Google::Analytics::Data::V1beta::Metric.new(name: "screenPageViews")
avg_session_duration = ::Google::Analytics::Data::V1beta::Metric.new(name: "averageSessionDuration")
date_range = ::Google::Analytics::Data::V1beta::DateRange.new(start_date: 1.month.ago.to_date.to_s, end_date: Date.current.to_s)
request = ::Google::Analytics::Data::V1beta::RunReportRequest.new(
property: "properties/#{property.remote_property_id}",
metrics: [bounce_rate, page_views, avg_session_duration],
dimensions: [date_dimension],
date_ranges: [date_range],
keep_empty_rows: true
)
response = analytics_data_service.run_report request
replacement_requests = [
Google::Apis::DocsV1::ReplaceAllTextRequest.new(contains_text: "{{name}}", replace_text: "Joe"),
Google::Apis::DocsV1::ReplaceAllTextRequest.new(contains_text: "{{age}}", replace_text: "34"),
Google::Apis::DocsV1::ReplaceAllTextRequest.new(contains_text: "{{address}}", replace_text: "Westwood"),
]
batch_request = Google::Apis::DocsV1::BatchUpdateDocumentRequest.new(requests: replacement_requests)
Given the above code, when I pass this BatchUpdateDocumentRequest instance into my service.batch_update_document call, I receive a 400 Bad Request. This seems to be related to the way the batch request is being serialized.
To illustrate, if we call batch_request.to_json we receive the following:
"{\"requests\":[{},{},{}]}"
This tells me that something is going wrong during serialization; however, my code seems fairly canonical.
Any thoughts on why my requests are failing to be serialized?
You want to use the replaceAllText request with google-api-client for Ruby.
You have already been able to put and get values in a Google Document using the Google Docs API.
If my understanding is correct, how about this modification? In your script, the request body is serialized as {"requests":[{},{},{}]}, which is what causes the error. Please modify the script as follows.
Modification points:
Use Google::Apis::DocsV1::SubstringMatchCriteria for the contains_text of Google::Apis::DocsV1::ReplaceAllTextRequest.
Wrap each Google::Apis::DocsV1::ReplaceAllTextRequest in a Google::Apis::DocsV1::Request.
With these modifications, the request body is created correctly.
Modified script:
text1 = Google::Apis::DocsV1::SubstringMatchCriteria.new(text: "{{name}}")
text2 = Google::Apis::DocsV1::SubstringMatchCriteria.new(text: "{{age}}")
text3 = Google::Apis::DocsV1::SubstringMatchCriteria.new(text: "{{address}}")
req1 = Google::Apis::DocsV1::ReplaceAllTextRequest.new(contains_text: text1, replace_text: "Joe")
req2 = Google::Apis::DocsV1::ReplaceAllTextRequest.new(contains_text: text2, replace_text: "34")
req3 = Google::Apis::DocsV1::ReplaceAllTextRequest.new(contains_text: text3, replace_text: "Westwood")
replacement_requests = [
Google::Apis::DocsV1::Request.new(replace_all_text: req1),
Google::Apis::DocsV1::Request.new(replace_all_text: req2),
Google::Apis::DocsV1::Request.new(replace_all_text: req3)
]
batch_request = Google::Apis::DocsV1::BatchUpdateDocumentRequest.new(requests: replacement_requests)
# result = service.batch_update_document(document_id, batch_request) # Use this line to send the batch request to your document.
Request body:
When the above script is run, the following request body is created.
{"requests":[
{"replaceAllText":{"containsText":{"text":"{{name}}"},"replaceText":"Joe"}},
{"replaceAllText":{"containsText":{"text":"{{age}}"},"replaceText":"34"}},
{"replaceAllText":{"containsText":{"text":"{{address}}"},"replaceText":"Westwood"}}
]}
Note:
If an error related to authorization occurs, please confirm your scopes and that the Docs API has been enabled.
References:
Method: documents.batchUpdate
ReplaceAllTextRequest
If this didn't work, I apologize.
I am trying to integrate a QnA Maker knowledge base with Azure Bot Service.
I am unable to find the knowledge base id on the QnA Maker portal.
How do I find the kb id in the QnA Maker portal?
The Knowledge Base Id can be located in Settings under “Deployment details” in your knowledge base. It is the GUID nestled between “knowledgebases” and “generateAnswer” in the POST request shown there.
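For illustration, the deployment details pane shows a sample request of roughly this shape (the resource name and endpoint key are placeholders); the GUID in the path is the knowledge base id:

POST /knowledgebases/<knowledge-base-id>/generateAnswer
Host: https://<your-resource-name>.azurewebsites.net/qnamaker
Authorization: EndpointKey <endpoint-key>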
Hope this helps!
Hey, you can also use Python to get this; take a look at the following code.
That is, if you want to write a program that retrieves the knowledge base ids dynamically.
import http.client, json, sys

# Represents the various elements used to create the HTTP request path for QnA Maker operations.
# Replace these placeholders with your resource name and a valid subscription key.
host = '<your-resource-name>.cognitiveservices.azure.com'
subscription_key = '<QnA-Key>'
get_kb_method = '/qnamaker/v4.0/knowledgebases/'

try:
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-Type': 'application/json'
    }
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", get_kb_method, None, headers)
    response = conn.getresponse()
    data = response.read().decode("UTF-8")
    result = None
    if len(data) > 0:
        result = json.loads(data)
        # print(json.dumps(result, sort_keys=True, indent=2))
    # The id of the first knowledge base; iterate result["knowledgebases"] to list them all.
    KB_id = result["knowledgebases"][0]["id"]
    print(response.status)
    print(KB_id)
except:
    print("Unexpected error:", sys.exc_info()[0])
    print("Unexpected error:", sys.exc_info()[1])
I have some indices named test-1-in, test-2-in, test-3-in. I want to do the equivalent of _cat/indices/test-*-in from the Java API. How can I do this?
I tried using the IndicesAdminClient but had no luck.
Given an Elasticsearch Client object:
client.admin().indices()
.getIndex(new GetIndexRequest().indices("regex-*"))
.actionGet().getIndices();
In addition to Mario's answer, use the following to retrieve the indices with the Elasticsearch 6.4.0 high level REST client:
GetIndexRequest request = new GetIndexRequest().indices("*");
GetIndexResponse response = client.indices().get(request, RequestOptions.DEFAULT);
String[] indices = response.getIndices();
I have a solution:
final ClusterStateRequest clusterStateRequest = new ClusterStateRequest();
clusterStateRequest.clear().metaData(true);
final IndicesOptions strictExpandIndicesOptions = IndicesOptions.strictExpand();
clusterStateRequest.indicesOptions(strictExpandIndicesOptions);
ClusterStateResponse clusterStateResponse = client.admin().cluster().state(clusterStateRequest).get();
clusterStateResponse.getState().getMetaData().getIndices();
This will return all indices. After that, the regex matching has to be done manually; this is what the _cat implementation does in the Elasticsearch source code.
In case you want to cat indices with the ?v option:
IndicesStatsRequestBuilder indicesStatsRequestBuilder = new IndicesStatsRequestBuilder(client, IndicesStatsAction.INSTANCE);
IndicesStatsResponse response = indicesStatsRequestBuilder.execute().actionGet();
for (Map.Entry<String, IndexStats> m : response.getIndices().entrySet()) {
System.out.println(m);
}
Each entry contains the document count, storage usage, etc. You can run this for all indices or filter it down to specific ones.
PS: Tested with version 5.6.0.
Following on from suggestions, I am trying to use List.GetItems(Query) to retrieve my initial data subset rather than the entire list contents via List.Items. However, whereas List.Items.Cast() results in a usable IEnumerable for LINQ, List.GetItems(Query).Cast() does not.
Working Code:
IEnumerable<SPListItem> results = SPContext.Current.Web.Lists[ListName].Items.Cast<SPListItem>()
    .Where(item => item["Date"] != null)
    .Where(item => DateTime.Parse(item["Date"].ToString()) >= StartDate)
    .Where(item => DateTime.Parse(item["Date"].ToString()) <= EndDate);
MessageLine = results.Count().ToString();
Non-working Code:
string SPStartDate = SPUtility.CreateISO8601DateTimeFromSystemDateTime(this.StartDate);
string SPEndDate = SPUtility.CreateISO8601DateTimeFromSystemDateTime(this.EndDate);
SPQuery MyQuery = new SPQuery();
MyQuery.Query = "<Where><And><And><Geq><FieldRef Name='Date'/><Value Type='DateTime'>" + SPStartDate + "</Value></Geq><Leq><FieldRef Name='Date'/><Value Type='DateTime'>" + SPEndDate + "</Value></Leq></And></Where>";
IEnumerable<SPListItem> results = SPContext.Current.Web.Lists[ListName].GetItems(MyQuery).Cast<SPListItem>();
MessageLine = results.Count().ToString();
The List.GetItems(Query).Cast() method produces the following Exception on the .Count() line:
Microsoft.SharePoint.SPException: Cannot complete this action. Please try again. ---> System.Runtime.InteropServices.COMException (0x80004005): Cannot complete this action. Please try again.
   at Microsoft.SharePoint.Library.SPRequestInternalClass.GetListItemDataWithCallback(String bstrUrl, String bstrListName, String bstrViewName, String bstrViewXml, SAFEARRAYFLAGS fSafeArrayFlags, ISP2DSafeArrayWriter pSACallback, ISPDataCallback pPagingCallback, ISPDataCallback pSchemaCallback)
   at Microsoft.SharePoint.Library.SPRequest.GetListItemDataWithCallback(String bstrUrl, String bstrListName, String bstrViewName, String bstrViewXml, SAFEARRAYFLAGS fSafeArrayFlags, ISP2DSafeArrayWriter pSACallback, ISPDataCallback pPagingCallback, ISPDataCallback pSchemaCallback)
   --- End of inner exception stack trace ---
   at Microsoft.SharePoint.Library.SPRequest.GetListItemDataWithCallback(String bstrUrl, String bstrListName, String bstrViewName, String bstrViewXml, SAFEARRAYFLAGS fSafeArrayFlags, ISP2DSafeArrayWriter pSACallback, ISPDataCallback pPagingCallback, ISPDataCallback pSchemaCallback)
   at Microsoft.SharePoint.SPListItemCollection.EnsureListItemsData()
   at Microsoft.SharePoint.SPListItemCollection.Undirty()
   at Microsoft.SharePoint.SPBaseCollection.System.Collections.IEnumerable.GetEnumerator()
   at System.Linq.Enumerable.d__aa`1.MoveNext()
   at System.Linq.Enumerable.Count[TSource](IEnumerable`1 source)
   at Test.GetTransactionsInPeriod()
   at Test.CreateChildControls()
Can anyone suggest anything?
From the error message it looks like the CAML query is wrong. You may want to run it through something like U2U's CAML Query Builder to double check. The error is thrown by SharePoint before the requested casts happen. Glancing over it, I think you have an extra <And> at the beginning (<Where><And><And>), with only one matching </And>.
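For illustration, a sketch of the same query with the stray <And> removed; {SPStartDate} and {SPEndDate} stand for the ISO 8601 strings built in the question, and the field name 'Date' is taken from your query (not validated against your list schema):

<Where>
  <And>
    <Geq><FieldRef Name='Date'/><Value Type='DateTime'>{SPStartDate}</Value></Geq>
    <Leq><FieldRef Name='Date'/><Value Type='DateTime'>{SPEndDate}</Value></Leq>
  </And>
</Where>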
By the way: don't use SPWeb.Lists[Name]. This loads every list in the SPWeb (including metadata), which is rather resource intensive. Using one of the SPWeb.GetList or SPWeb.Lists.GetList methods is better.