I was using Elasticsearch 6.2.2, and this is how I converted a JSON string to an XContentBuilder:
XContentBuilder builder = JsonXContent.contentBuilder().prettyPrint();
XContentParser parser = JsonXContent.jsonXContent.createParser(NamedXContentRegistry.EMPTY, jsonObj.toString());
builder.copyCurrentStructure(parser);
It worked well until I upgraded to Elasticsearch 6.3+. The same code produces an error on ES 6.3+:
Description Resource Path Location Type
The method createParser(NamedXContentRegistry, DeprecationHandler, String) in the type JsonXContent is not applicable for the arguments (NamedXContentRegistry, String)  test.java
The compile error spells it out: your createParser call is missing a DeprecationHandler parameter. So you should pass a DeprecationHandler, for example:
JsonXContent.jsonXContent.createParser(NamedXContentRegistry.EMPTY,
LoggingDeprecationHandler.INSTANCE,
jsonObj.toString());
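Putting it together, here is a minimal sketch of the 6.3+ version (assuming jsonObj is any object whose toString() returns valid JSON; the class and method names are just for illustration):
import java.io.IOException;

import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.json.JsonXContent;

public class JsonToXContent {
    // Hypothetical helper: convert a JSON string to an XContentBuilder on ES 6.3+.
    static XContentBuilder fromJson(String json) throws IOException {
        XContentBuilder builder = JsonXContent.contentBuilder().prettyPrint();
        XContentParser parser = JsonXContent.jsonXContent.createParser(
                NamedXContentRegistry.EMPTY,
                LoggingDeprecationHandler.INSTANCE,   // the extra argument required since 6.3
                json);
        builder.copyCurrentStructure(parser);         // replay the parser events into the builder
        return builder;
    }
}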
I am receiving JSON from a Terraform http data source:
data "http" "example" {
url = "${var.cloudwatch_endpoint}/api/v0/components"
# Optional request headers
request_headers {
"Accept" = "application/json"
"X-Api-Key" = "${var.api_key}"
}
}
It outputs the following.
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby script. Here is the Terraform to pass it:
data "external" "component_ids" {
program = ["ruby", "./fetchComponent.rb",]
query = {
data = "${data.http.example.body}"
}
}
Here is the Ruby script:
#!/usr/bin/env ruby
require 'json'
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access the data inside of the external data source, but I am unable to.
Ultimately what I want to do is create a list of the componentID values located within the external data source.
Some things I have tried:
* output.external: key "0" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result[0]}
* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
${element(data.external.component_ids.result["componentID"],0)}
* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
${data.external.component_ids.result["componentID"]}
* output.external: lookup: lookup failed to find 'componentID' in:
${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think the solution through.
Terraform 0.11.x and earlier can't decode JSON directly, but there is a workaround for working with nested lists.
Your Ruby script needs to be adjusted to produce output shaped like the variable http below; then you should be able to get what you need.
$ cat main.tf
variable "http" {
  type    = "list"
  default = [{componentID = "k8QEbeuHdDnU", name = "Jenkins"}]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}
$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
http = k8QEbeuHdDnU
Using HL7.FHIR.STU3.Core, I am getting an invalid cast exception when I try to parse a PlanDefinition FHIR file.
Do I need to set the Schema for PlanDefinition file?
string HL7FilePath = string.Format("{0}\\{1}", System.IO.Directory.GetCurrentDirectory(), "ANA3.xml");
string HL7FileData = File.ReadAllText(HL7FilePath);
var b = new FhirXmlParser().Parse<Bundle>(HL7FileData);
Error:
InvalidCastException {"Unable to cast object of type 'Hl7.Fhir.Model.PlanDefinition' to type 'Hl7.Fhir.Model.Bundle'."}
You are trying to parse a PlanDefinition resource into a Bundle object, as the InvalidCastException tells you. If you change Parse<Bundle> to Parse<PlanDefinition>, your code should work fine.
I am trying to convert the yyyyMMdd format to the yyyy/MM/dd format using Pig, and for that I have written the code below.
Code:
STOCK_A = LOAD '/user/root/xxxx/*' USING PigStorage('|');
data = FILTER STOCK_A BY ($1 matches '.*ID.*');
MSH_DATA = FOREACH data GENERATE ToDate($8,'yyyy/MM/dd','UTC') AS dob;
When I try to dump the result I get the error below.
ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 0:
Exception while executing [POUserFunc (Name: POUserFunc(org.apache.pig.builtin.ToDate3ARGS)[datetime] - scope-209 Operator Key: scope-209) children: null at []]:
java.lang.IllegalArgumentException: Invalid format: "19690321" is too short
Sample:
EXVORV##PDULD21F|ID|1|483|1020783||EXVORV##PDULD||19690321|F|
$8 seems valid to me, and I am not able to locate the reason the issue occurs. Any help would be really appreciated.
You use:
ToDate($8,'yyyy/MM/dd','UTC')
but the format is
19690321
so you should have
ToDate($8,'yyyyMMdd','UTC')
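To see why, here is a plain-Java illustration of the same mismatch (a standalone check for illustration only, not Pig code; the class name and sample value are taken from the question):
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DobFormatCheck {
    public static void main(String[] args) {
        String raw = "19690321";                      // what the field actually contains
        // Parsing with the pattern that matches the input layout works:
        LocalDate dob = LocalDate.parse(raw, DateTimeFormatter.ofPattern("yyyyMMdd"));
        System.out.println(dob);                      // 1969-03-21
        // Parsing the same value with "yyyy/MM/dd" would fail, because the input
        // has no slashes and is shorter than that pattern expects.
    }
}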
The issue is most likely because of the load statement. Since you are not specifying the schema, the datatype by default will be bytearray. You will have to convert it to chararray before passing the field to ToDate:
STOCK_A = LOAD '/user/root/xxxx/*' USING PigStorage('|');
data = FILTER STOCK_A BY ($1 matches '.*ID.*');
MSH_DATA = FOREACH data GENERATE ToDate((chararray)$8,'yyyyMMdd','UTC') AS dob;
When injecting data collected by Fluentd into Elasticsearch using fluent-plugin-elasticsearch, some data caused the following error:
2017-04-09 23:47:37 +0900 [error]: Could not push log to Elasticsearch: {"took"=>3, "errors"=>true, "items"=>[{"index"=>{"_index"=>"logstash-201704", "_type"=>"ruby", "_id"=>"AVtTLz_cUzkwT9CQCxrH", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [message]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:27"}}}}, .....]}
It seems that Elasticsearch rejected the data with the errors failed to parse [message] and Can't get text on a START_OBJECT at 1:27, but I cannot see what data is sent to Elasticsearch or what's wrong.
Any ideas?
fluent-plugin-elasticsearch uses the _bulk API to send data. I put request-dumping code in /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/elasticsearch-api-5.0.4/lib/elasticsearch/api/actions/bulk.rb as follows:
def bulk(arguments={})
  ...
    payload = body
  end

  $log.info([method, path, params, payload].inspect) # <=== here ($log is the global logger of fluentd)
  perform_request(method, path, params, payload).body
And I found the request sent to Elasticsearch was the following:
POST /_bulk
{"index":{"_index":"logstash-201704","_type":"ruby"}}
{"level":"INFO","message":{"status":200,"time":{"total":46.26,"db":33.88,"view":12.38},"method":"PUT","path":"filtered","params":{"time":3815.904,"chapter_index":0},"response":[{}]},"node":"main","time":"2017-04-09T14:39:06UTC","tag":"filtered.console","#timestamp":"2017-04-09T23:39:06+09:00"}
The problem is that the message field contains a JSON object, although this field is mapped as an analyzed string in Elasticsearch.
I'm switching to the MongoDB Java driver version 3. I cannot figure out how to perform an update of a Document. For example, I want to change the "age" of a user:
MongoDatabase db = mongoClient.getDatabase("exampledb");
MongoCollection<org.bson.Document> coll = db.getCollection("collusers");
Document doc1 = new Document("name", "frank").append("age", 55).append("phone", "123-456-789");
Document doc2 = new Document("name", "frank").append("age", 33).append("phone", "123-456-789");
coll.updateOne(doc1, doc2);
The output is:
java.lang.IllegalArgumentException: Invalid BSON field name name
Any idea how to fix it?
Thanks!
Use:
coll.updateOne(eq("name", "frank"), new Document("$set", new Document("age", 33)));
for updating the first Document found. For multiple updates:
coll.updateMany(eq("name", "frank"), new Document("$set", new Document("age", 33)));
At this link, you can find a quick reference to the MongoDB Java 3 Driver.
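Putting the pieces together, here is a minimal self-contained sketch of that update (assuming a mongod running on localhost and the exampledb/collusers names from the question; eq and set come from the driver's Filters and Updates helpers):
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

public class UpdateAgeExample {
    public static void main(String[] args) {
        MongoClient mongoClient = new MongoClient();                   // localhost:27017 by default
        MongoDatabase db = mongoClient.getDatabase("exampledb");
        MongoCollection<Document> coll = db.getCollection("collusers");

        // Match on "name" and change only the "age" field; the rest of the document stays intact.
        coll.updateOne(eq("name", "frank"), set("age", 33));

        mongoClient.close();
    }
}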
In MongoDB Java driver 3.0, when you update a document, you can call coll.replaceOne to replace the document, or call coll.updateOne / coll.updateMany to update document(s) using the $set/$setOnInsert/etc. operators.
In your case, you can try:
coll.updateOne(eq("name", "frank"), new Document("$set", new Document("age", 33)));
coll.replaceOne(eq("name", "frank"), new Document("age", 33));
You can try this:
coll.findOneAndReplace(doc1, doc2);