Elasticsearch sink connector needs to read multiple topics (nearly 1000 or more) from Kafka and write to Elasticsearch - apache-kafka-connect

I am new to Kafka Connect.
I have a requirement where the connector needs to read topics from Kafka dynamically and write them to Elasticsearch.
Is there any way to achieve this?
Is there any way to use topics with patterns such as app* and test*? (The actual topics are apps-logging, app-location1, app-service, test-app, test-app2.)
Here is a sample configuration I used; it writes to the index location.service-2020.06.11. If I want to include other topics with a wildcard as mentioned above (I don't want to write out each topic name), how can I achieve that?
curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
  "name": "kafka-elastic-test-01",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "https://localhost:9200",
    "connection.username": "admin",
    "connection.password": "******",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "type.name": "_doc",
    "topics.regex": "location.service*",
    "key.ignore": "true",
    "schema.ignore": "true",
    "transforms": "TimestampRouter",
    "transforms.TimestampRouter.type": "org.apache.kafka.connect.transforms.TimestampRouter",
    "transforms.TimestampRouter.topic.format": "${topic}-${timestamp}",
    "transforms.TimestampRouter.timestamp.format": "yyyy.MM.dd"
  }
}'
Edit on June 9, 2020
Thanks for the reply, @Iskuskov Alexander.
I tried your suggestion; here is the output. Any suggestions are welcome.
curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
  "name": "kafka-elastic-test-03",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "connection.url": "http://localhost:9200",
    "connection.username": "admin",
    "connection.password": "******",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "type.name": "_doc",
    "topics.regex": "(app|test).*",
    "key.ignore": "true",
    "schema.ignore": "true",
    "transforms": "TimestampRouter",
    "transforms.TimestampRouter.type": "org.apache.kafka.connect.transforms.TimestampRouter",
    "transforms.TimestampRouter.topic.format": "${topic}-${timestamp}",
    "transforms.TimestampRouter.timestamp.format": "yyyy.MM.dd"
  }
}'
curl -w '\n' 'http://localhost:8083/connectors/kafka-elastic-test-03/status'
{
"name": "kafka-elastic-test-03",
"connector": {
"state": "RUNNING",
"worker_id":"xxxx:8083"
},
"tasks": [{
"id":0,
"state":"FAILED",
"worker_id":"xxxx:8083",
"trace":
"org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\t
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)\n\t
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:488)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:465)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)\n\t
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\t
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\t
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\t
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\t
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\t
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\t
at java.base/java.lang.Thread.run(Thread.java:834)\n
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error: \n\t
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:355)\n\t
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:86)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$2(WorkerSinkTask.java:488)\n\t
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)\n\t
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)\n\t
... 13 more\n
Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.io.JsonEOFException: Unexpected end-of-input: expected close marker for Object (start marker at [Source: (byte[])\"{\"; line: 1, column: 1])\n
at [Source: (byte[])\"{\"; line: 1, column: 3]\n
Caused by: com.fasterxml.jackson.core.io.JsonEOFException: Unexpected end-of-input: expected close marker for Object (start marker at [Source: (byte[])\"{\"; line: 1, column: 1])\n
at [Source: (byte[])\"{\"; line: 1, column: 3]\n\t
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportInvalidEOF(ParserMinimalBase.java:618)\n\t
at com.fasterxml.jackson.core.base.ParserBase._handleEOF(ParserBase.java:485)\n\t
at com.fasterxml.jackson.core.base.ParserBase._eofAsNextChar(ParserBase.java:497)\n\t
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd(UTF8StreamJsonParser.java:2933)\n\t
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextFieldName(UTF8StreamJsonParser.java:964)\n\t
at com.fasterxml.jackson.databind.deser.std.BaseNodeDeserializer.deserializeObject(JsonNodeDeserializer.java:246)\n\t
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:68)\n\t
at com.fasterxml.jackson.databind.deser.std.JsonNodeDeserializer.deserialize(JsonNodeDeserializer.java:15)\n\t
at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4057)\n\t
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2572)\n\t
at org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:58)\n\t
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:353)\n\t
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:86)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$2(WorkerSinkTask.java:488)\n\t
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)\n\t
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)\n\t
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:488)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:465)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)\n\t
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)\n\t
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\t
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\t
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\t
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\t
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\t
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\t
at java.base/java.lang.Thread.run(Thread.java:834)\n"
}],
"type": "sink"
}

Yes, you can use the topics.regex parameter of the Kafka Connect sink configuration.
In your case it could look like: "topics.regex": "(app|test).*"
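Regarding the FAILED task in the edit: the trace shows the JsonConverter failing on a record whose value is just "{", i.e. at least one of the matched topics contains a message that is not valid JSON. If such records can safely be skipped, one hedged option is to relax the connector's error handling and route bad records to a dead letter queue; this is only a sketch, and the DLQ topic name below is illustrative:
"errors.tolerance": "all",
"errors.log.enable": "true",
"errors.log.include.messages": "true",
"errors.deadletterqueue.topic.name": "dlq-kafka-elastic-test",
"errors.deadletterqueue.topic.replication.factor": "1"
With errors.tolerance at its default of none, a single unparseable message fails the task, which is exactly the "Tolerance exceeded in error handler" line in the trace.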

Related

Postman gives me the following error: "error": "no handler found for uri [/megacorp/employee/1] and method [PUT]"

I am just starting with Elasticsearch. I began by adding an index, which works, and I can get information about it:
GET http://localhost:9200/megacorp
"megacorp": {
"aliases": {},
"mappings": {},
"settings": {
"index": {
"routing": {
"allocation": {
"include": {"_tier_preference": "data_content"
}
}
},
"number_of_shards": "1",
"provided_name": "megacorp",
"creation_date": "1657286196414",
"number_of_replicas": "1",
"uuid": "HbsAAv-mRziSUKGiXPMyPA",
"version": {
"created": "8030299"
The problem comes when I try to add a document; I get the following error:
PUT http://localhost:9200/megacorp/empoyee/1
{
  "first_name": "John",
  "last_name": "Smith",
  "age": 25,
  "about": "I love to go rock climbing",
  "interests": ["sports", "music"]
}
"error": "no handler found for uri [/megacorp/empoyee/1] and method [PUT]"
I think I've done everything right, but it still does not work.
The problem here is that you are using the wrong URL in your request.
According to the documentation, you must use the following paths to index a document:
Request
PUT /<target>/_doc/<_id>
POST /<target>/_doc/
PUT /<target>/_create/<_id>
POST /<target>/_create/<_id>
So you are missing the _doc or _create part.
UPDATE:
cURL Example
curl -X PUT --location "http://localhost:9200/megacorp/_doc/1" \
-H "Content-Type: application/json" \
-d "{
\"first_name\": \"John\",
\"last_name\": \"Smith\",
\"age\": 25,
\"about\": \"I love to go rock climbing\",
\"interests\": [\"sports\", \"music\"]
}"
As of Elasticsearch 8.x, mapping types are removed. You need to use this URL for indexing documents:
http://localhost:9200/megacorp/_doc/1
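For instance, indexing the document from the question could look like this (a sketch, reusing the payload shown above):
curl -X PUT "http://localhost:9200/megacorp/_doc/1" \
  -H "Content-Type: application/json" \
  -d '{"first_name": "John", "last_name": "Smith", "age": 25, "about": "I love to go rock climbing", "interests": ["sports", "music"]}'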

illegal_argument_exception while using kafka connector to push to elasticsearch

I am trying to push logs from Kafka topics to Elasticsearch.
My message in Kafka:
{
"#timestamp": 1589549688.659166,
"log": "13:34:48.658 [pool-2-thread-1] DEBUG health check success",
"stream": "stdout",
"time": "2020-05-15T13:34:48.659166158Z",
"pod_name": "my-pod-789f8c85f4-mt62l",
"namespace_name": "services",
"pod_id": "600ca012-91f5-XXXX-XXXX-XXXXXXXXXXX",
"host": "ip-192-168-88-59.ap-south-1.compute.internal",
"container_name": "my-pod",
"docker_id": "XXXXXXXXXXXXXXXXX1435bb2870bfc9d20deb2c483ce07f8e71ec",
"container_hash": "myregistry",
"labelpod-template-hash": "9tignfe9r",
"labelsecurity.istio.io/tlsMode": "istio",
"labelservice": "my-pod",
"labelservice.istio.io/canonical-name": "my-pod",
"labelservice.istio.io/canonical-revision": "latest",
"labeltype": "my-pod",
"annotationkubernetes.io/psp": "eks.privileged",
"annotationsidecar.istio.io/status": "{\"version\":\"58dc8b12bb311f1e2f46fd56abfe876ac96a38d7ac3fc6581af3598ccca7522f\"}"
}
This is my connector config:
{
"name": "logs",
"config": {
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"connection.url": "http://es:9200",
"connection.username": "username",
"connection.password": "password",
"tasks.max": "10",
"topics": "my-pod",
"name": "logs",
"type.name": "_doc",
"schema.ignore": "true",
"key.ignore": "true",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"transforms": "routeTS",
"transforms.routeTS.type": "org.apache.kafka.connect.transforms.TimestampRouter",
"transforms.routeTS.topic.format": "${topic}-${timestamp}",
"transforms.routeTS.timestamp.format": "YYYYMMDD"
}
}
This is the error I'm getting:
cp-kafka-connect-server [2020-05-15 13:30:59,083] WARN Failed to execute batch 4830 of 18 records with attempt 4/6, will attempt retry after 539 ms. Failure reason: Bulk request failed: [{"type":"illegal_argument_exception","reason":"mapper [labelservice] of different type, current_type [text], merged_type [ObjectMapper]"}
I haven't created any mapping beforehand; I'm depending on the connector to create the index.
This is the mapping I have in ES, which was auto-created:
{
"mapping": {}
}
The error message is clear:
reason":"mapper [labelservice] of different type, current_type [text],
merged_type [ObjectMapper]"
It means that in your index mapping labelservice is defined as text, but you are sending the following data in the labelservice field:
"labelservice": "my-pod",
"labelservice.istio.io/canonical-name": "my-pod",
"labelservice.istio.io/canonical-revision": "latest",
This is the format of the object type in Elasticsearch, so there is a mismatch in the data type, which is what caused the error message.
You need to change your mapping and define labelservice as an object to make it work. Refer to the object datatype in Elasticsearch for more info.
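Because the connector routes to a new index per day via TimestampRouter, one way to apply such a mapping up front is an index template. Below is a minimal sketch, assuming Elasticsearch 7.x (6.x would wrap the properties in a type name); the template name is illustrative:
curl -X PUT "http://es:9200/_template/my-pod-logs" -H "Content-Type: application/json" -d '{
  "index_patterns": ["my-pod-*"],
  "mappings": {
    "properties": {
      "labelservice": { "type": "object" }
    }
  }
}'
Keep in mind that an existing index's field type cannot be changed in place, so already-created daily indices would need to be reindexed or left to roll over. Also, once labelservice is an object, a scalar value like "labelservice": "my-pod" and the dotted labelservice.* keys cannot coexist under it, so the flat key may need to be renamed before indexing.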

ElasticSearch parse error about "compressor detection" POSTing to /_bulk

I am trying to load data into Elasticsearch, but I am having problems formatting my JSON file and posting it to localhost.
My JSON file has this structure:
{"datasetid": "dataset1", "recordid": "01fc9ae28dd02cd94c97fc759cc0fe9a7b640a3b", "fields":{"movie":"Star Wars", "emplacement":"USA", "movie_id":"40"}, "record_timestamp": "2019-02-08T11:51:00+01:00"}, {"datasetid": "dataset1", "recordid":"906117d0d489f38218df8e01cb228c217c050ce2", "fields": {"movie":"James Bond", "emplacement":"USA", "movie_id":"41"}, "record_timestamp":"2019-02-08T11:51:00+01:00"}
The file has more records, but they all follow this structure.
From what I have found by searching on the Internet, I came up with this command:
<sirene_v3.json jq -c '. | {"index": {"_index": "json", "_type": "json"}}, .' \
| curl -XPOST localhost:9200/_bulk -H "Content-Type: application/json" --data-binary @-
But I got this error and I have no idea what is going wrong in here:
{
"took": 2,
"errors": true,
"items": [
{
"index": {
"_index": "json",
"_type": "json",
"_id": "_IOQumkBjKtepv9oHnVg",
"status": 400,
"error": {
"type": "mapper_parsing_exception",
"reason": "failed to parse",
"caused_by": {
"type": "not_x_content_exception",
"reason": "Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"
}
}
}
}
]
}
Does anyone have any ideas about it?
Thank you in advance

ExtractField and Parse JSON in kafka-connect sink

I have a kafka-connect flow of mongodb->kafka connect->elasticsearch sending data end to end OK, but the payload document is JSON encoded. Here's my source mongodb document.
{
"_id": "1541527535911",
"enabled": true,
"price": 15.99,
"style": {
"color": "blue"
},
"tags": [
"shirt",
"summer"
]
}
And here's my mongodb source connector configuration:
{
"name": "redacted",
"config": {
"connector.class": "com.teambition.kafka.connect.mongo.source.MongoSourceConnector",
"databases": "redacted.redacted",
"initial.import": "true",
"topic.prefix": "redacted",
"tasks.max": "8",
"batch.size": "1",
"key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
"value.serializer": "org.apache.kafka.common.serialization.JSONSerializer",
"key.serializer.schemas.enable": false,
"value.serializer.schemas.enable": false,
"compression.type": "none",
"mongo.uri": "mongodb://redacted:27017/redacted",
"analyze.schema": false,
"schema.name": "__unused__",
"transforms": "RenameTopic",
"transforms.RenameTopic.type":
"org.apache.kafka.connect.transforms.RegexRouter",
"transforms.RenameTopic.regex": "redacted.redacted_Redacted",
"transforms.RenameTopic.replacement": "redacted"
}
}
Over in elasticsearch, it ends up looking like this:
{
"_index" : "redacted",
"_type" : "kafka-connect",
"_id" : "{\"schema\":{\"type\":\"string\",\"optional\":true},\"payload\":\"1541527535911\"}",
"_score" : 1.0,
"_source" : {
"ts" : 1541527536,
"inc" : 2,
"id" : "1541527535911",
"database" : "redacted",
"op" : "i",
"object" : "{ \"_id\" : \"1541527535911\", \"price\" : 15.99,
\"enabled\" : true, \"tags\" : [\"shirt\", \"summer\"],
\"style\" : { \"color\" : \"blue\" } }"
}
}
I'd like to use two single message transforms:
ExtractField to grab object, which is a string of JSON
Something to parse that JSON into an object, or just let the normal JsonConverter handle it, as long as it ends up properly structured in Elasticsearch.
I've attempted to do it with just ExtractField in my sink config, but I see this error logged by Kafka:
kafka-connect_1 | org.apache.kafka.connect.errors.ConnectException:
Bulk request failed: [{"type":"mapper_parsing_exception",
"reason":"failed to parse",
"caused_by":{"type":"not_x_content_exception",
"reason":"Compressor detection can only be called on some xcontent bytes or
compressed xcontent bytes"}}]
Here's my elasticsearch sink connector configuration. In this version, I have things working but I had to code a custom ParseJson SMT. It's working well, but if there's a better way or a way to do this with some combination of built-in stuff (converters, SMTs, whatever works), I'd love to see that.
{
"name": "redacted",
"config": {
"connector.class":
"io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"batch.size": 1,
"connection.url": "http://redacted:9200",
"key.converter.schemas.enable": true,
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"schema.ignore": true,
"tasks.max": "1",
"topics": "redacted",
"transforms": "ExtractFieldPayload,ExtractFieldObject,ParseJson,ReplaceId",
"transforms.ExtractFieldPayload.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
"transforms.ExtractFieldPayload.field": "payload",
"transforms.ExtractFieldObject.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
"transforms.ExtractFieldObject.field": "object",
"transforms.ParseJson.type": "reaction.kafka.connect.transforms.ParseJson",
"transforms.ReplaceId.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
"transforms.ReplaceId.renames": "_id:id",
"type.name": "kafka-connect",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": false
}
}
I am not sure about your Mongo connector; I don't recognize the class or the configurations... Most people probably use the Debezium Mongo connector.
I would set it up this way, though:
"connector.class": "com.teambition.kafka.connect.mongo.source.MongoSourceConnector",
"key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
"value.serializer": "org.apache.kafka.common.serialization.JSONSerializer",
"key.serializer.schemas.enable": false,
"value.serializer.schemas.enable": true,
The schemas.enable setting is important; that way the internal Connect data classes know how to convert to/from other formats.
Then, in the sink, you again need to use the JSON deserializer (via the converter) so that it creates a full object rather than a plain-text string, as you currently see in Elasticsearch ({\"schema\":{\"type\":\"string\").
"connector.class":
"io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"key.converter.schemas.enable": false,
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": true
And if this doesn't work, then you might have to manually create your index mapping in Elasticsearch ahead of time so it knows how to actually parse the strings you are sending it; a sketch follows below.
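Here is a minimal sketch of pre-creating that mapping from the sample Mongo document, assuming Elasticsearch 6.x so that the type name matches the sink's type.name (kafka-connect); the field types are guesses from the sample values, and the host/index names mirror the redacted config:
curl -X PUT "http://redacted:9200/redacted" -H "Content-Type: application/json" -d '{
  "mappings": {
    "kafka-connect": {
      "properties": {
        "id":      { "type": "keyword" },
        "enabled": { "type": "boolean" },
        "price":   { "type": "float" },
        "style":   { "properties": { "color": { "type": "keyword" } } },
        "tags":    { "type": "keyword" }
      }
    }
  }
}'
Since the sink sets schema.ignore to true, the connector will not create mappings from the record schema itself, so any fields not declared here still fall back to Elasticsearch's dynamic mapping.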

Getting "illegal_argument_exception" error while loading dataset in Elasticsearch

I am getting started with Elasticsearch and am trying to load a JSON dataset using the _bulk method, but I am getting the error below.
{
"error" : {
"root_cause" : [ {
"type" : "illegal_argument_exception",
"reason" : "Malformed action/metadata line [1], expected START_OBJECT or END_OBJECT but found [VALUE_NUMBER]"
} ],
"type" : "illegal_argument_exception",
"reason" : "Malformed action/metadata line [1], expected START_OBJECT or END_OBJECT but found [VALUE_NUMBER]"
},
"status" : 400
}
It seems like there is some issue with my JSON file, but I validated the JSON and it seems to be okay.
Here is my sample file:
{
"id": 3,
"customer_number": "",
"last_name": "anon",
"first_name": "zin",
"email": "anon#xyz.com",
"phone_number": "409-860-9006 x109",
"registered_at": "2007-05-02T16:27:50.74-05:00",
"last_visit_at": "2014-07-18T11:06:15-05:00",
"adcode": "",
"adcode_id": 0,
"affiliate_id": null,
"customer_type_id": 0,
"is_no_tax_customer": true,
"comments": "a",
"store_id": 5,
"source": "",
"search_string": "",
"no_account": false,
"sales_person": "SSB",
"alternate_phone_number": "800-936-9006 x109",
"is_affiliate_customer": false,
"updated_at": "2014-06-30T18:34:11.043-05:00",
"created_at": "2007-05-02T16:27:50.74-05:00",
"username": "",
"is_contact_information_only": false,
"tax_exemption_number": "",
"company": "anon",
"source_group": "",
"store_payment_methods_enabled": [0],
}
And the command used to post the data is below:
curl -XPOST 'localhost:9200/customer/_bulk?pretty' --data-binary "@account_sample.json"
Can anyone please help me out with this?
Just remove the last comma from the file:
{
"id": 3,
"customer_number": "",
"last_name": "anon",
"first_name": "zin",
"email": "anon#xyz.com",
"phone_number": "409-860-9006 x109",
"registered_at": "2007-05-02T16:27:50.74-05:00",
"last_visit_at": "2014-07-18T11:06:15-05:00",
"adcode": "",
"adcode_id": 0,
"affiliate_id": null,
"customer_type_id": 0,
"is_no_tax_customer": true,
"comments": "a",
"store_id": 5,
"source": "",
"search_string": "",
"no_account": false,
"sales_person": "SSB",
"alternate_phone_number": "800-936-9006 x109",
"is_affiliate_customer": false,
"updated_at": "2014-06-30T18:34:11.043-05:00",
"created_at": "2007-05-02T16:27:50.74-05:00",
"username": "",
"is_contact_information_only": false,
"tax_exemption_number": "",
"company": "anon",
"source_group": "",
"store_payment_methods_enabled": [0]
}
Your document was not valid JSON.
As mentioned in the error reason, it is expecting START_OBJECT or END_OBJECT.
Also note that your JSON is incorrect due to the extra comma at the end of the last field, "store_payment_methods_enabled": [0],. Another point you should consider is that you cannot have newline characters within the JSON; each document in the bulk body must be on a single line.
Even if you have valid JSON, if you don't provide the kind of operation you wish to perform (the action/metadata line, where START_OBJECT is expected), you will get this error.
You might have to change the contents of your input file account_sample.json to the following:
{ "index" : { "_index" : "someindex", "_id" : "1" } }
{"id": 3,"customer_number": "","last_name": "anon","first_name": "zin","email": "anon#xyz.com","phone_number": "409-860-9006 x109","registered_at": "2007-05-02T16:27:50.74-05:00","last_visit_at": "2014-07-18T11:06:15-05:00","adcode": "","adcode_id": 0,"affiliate_id": null,"customer_type_id": 0,"is_no_tax_customer": true,"comments": "a","store_id": 5,"source": "","search_string": "","no_account": false,"sales_person": "SSB","alternate_phone_number": "800-936-9006 x109","is_affiliate_customer": false,"updated_at": "2014-06-30T18:34:11.043-05:00","created_at": "2007-05-02T16:27:50.74-05:00","username": "","is_contact_information_only": false,"tax_exemption_number": "","company": "anon","source_group": "","store_payment_methods_enabled": [0]}
Please refer to the Elasticsearch Bulk API documentation to learn more about how these APIs work.
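After fixing the file, the repost could look like the hedged example below; the Content-Type header is required on recent Elasticsearch versions, and the bulk body must end with a newline:
curl -XPOST 'localhost:9200/customer/_bulk?pretty' \
  -H 'Content-Type: application/x-ndjson' \
  --data-binary "@account_sample.json"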
