File download through REST call after querying MongoDB and creating a CSV through Java - mongodb-java

Mongo Collection has the following data...
{ "_id" : "Sims", "count" : 32 }
{ "_id" : "Autumn", "count" : 35 }
{ "_id" : "Becker", "count" : 35 }
{ "_id" : "Cecile", "count" : 40 }
{ "_id" : "Poole", "count" : 32 }
{ "_id" : "Nanette", "count" : 31 }
Through a REST call, taking the id from the URL, can I query MongoDB through Java and download a CSV file containing all the data for that id?
Which APIs can be used for the file download? Note that the file is not located on some server.
Flow should be as follows:
http://localhost:8080/Application/Poole
Now my Java code would be something like:
MongoClient mongoClient = new MongoClient("localhost", 27017);
MongoDatabase mongoDatabase = mongoClient.getDatabase("test123");
MongoCollection<Document> mongoCollection = mongoDatabase.getCollection("testcoll");
mongoCollection.find(Filters.eq("_id", id));
The returned query result should be written to a CSV file, and the file download should then happen.
Is there also an API to convert the result of the Mongo query to a CSV file?

The best solution could be to use a plugin for saving a file and to write custom code that converts your JSON data (coming from the MongoDB collection) to CSV. I have written such code for myself, but it is in JavaScript. Let me know if you need it. Thanks.
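Since the answer above only sketches the approach, here is a minimal sketch of the whole flow in Java, assuming Spring MVC and the synchronous MongoDB Java driver. The database and collection names (test123, testcoll) and the URL pattern come from the question; the controller and method names are illustrative:

import static com.mongodb.client.model.Filters.eq;

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;
import org.bson.Document;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CsvDownloadController {

    private final MongoClient mongoClient = new MongoClient("localhost", 27017);

    // Handles e.g. http://localhost:8080/Application/Poole
    @GetMapping("/Application/{id}")
    public void downloadCsv(@PathVariable("id") String id,
                            HttpServletResponse response) throws Exception {
        MongoCollection<Document> collection =
                mongoClient.getDatabase("test123").getCollection("testcoll");

        // These two headers make the browser treat the response as a file download
        response.setContentType("text/csv");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + id + ".csv\"");

        PrintWriter writer = response.getWriter();
        writer.println("_id,count"); // CSV header row
        for (Document doc : collection.find(eq("_id", id))) {
            writer.println(doc.getString("_id") + "," + doc.get("count"));
        }
        writer.flush();
    }
}

Because the rows are written straight to the response output stream, the CSV never has to exist as a file on the server, which matches the constraint in the question.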

Related

Elastic Search : Bulk update using Rest Client throws java.io.IOException: An existing connection was forcibly closed by the remote host

I am trying to bulk-update a few fields in my documents using the Rest Client. Here is what my code looks like:
String updateLastUsedTimeInBulkPath = "/_bulk"; // Tried with just "_bulk" also
StringEntity updateLastUsedTimeRequest = new StringEntity(jsonStringToUpdateLastUsedTimeInBulk);
updateLastUsedTimeRequest.setContentType("application/x-ndjson");
elasticServerRestClient.performRequest("POST", updateLastUsedTimeInBulkPath, Collections.<String, String>emptyMap(), updateLastUsedTimeRequest);
jsonStringToUpdateLastUsedTimeInBulk in the above code looks like the following:
{ "update" : {"_index" : "routematch", "_type" : "routematch", "_id" : "2686101,8264892"} }
{ "doc" : {"lastUsedTime" : "1519221900208"} }
{ "update" : {"_index" : "routematch", "_type" : "routematch", "_id" : "2686101,2686101"} }
{ "doc" : {"lastUsedTime" : "1519221900209"} }
I tried executing the same request using Kibana and it worked fine and updated the fields, but it always fails when executed through the RestClient with java.io.IOException: An existing connection was forcibly closed by the remote host.
What is wrong with my code? I have tried indexing documents in bulk the same way, and that works fine with the RestClient. Why is the update failing?
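One thing worth checking, offered as an educated guess rather than a confirmed diagnosis: the _bulk endpoint requires every line of the NDJSON body, including the last one, to be terminated by a newline, and Kibana's console handles this for you while a hand-built string may lack it. A sketch of building the body from the question with explicit terminators (variable names are the question's own):

// Each action line and each document line must end with '\n',
// including the very last line of the body.
String jsonStringToUpdateLastUsedTimeInBulk =
        "{ \"update\" : {\"_index\" : \"routematch\", \"_type\" : \"routematch\", \"_id\" : \"2686101,8264892\"} }\n"
      + "{ \"doc\" : {\"lastUsedTime\" : \"1519221900208\"} }\n"
      + "{ \"update\" : {\"_index\" : \"routematch\", \"_type\" : \"routematch\", \"_id\" : \"2686101,2686101\"} }\n"
      + "{ \"doc\" : {\"lastUsedTime\" : \"1519221900209\"} }\n"; // the trailing newline matters

StringEntity updateLastUsedTimeRequest = new StringEntity(jsonStringToUpdateLastUsedTimeInBulk);
updateLastUsedTimeRequest.setContentType("application/x-ndjson");
elasticServerRestClient.performRequest("POST", "/_bulk",
        Collections.<String, String>emptyMap(), updateLastUsedTimeRequest);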

How to insert an element into already present list in elastic search

Say I have documents stored like below.
document 1
{
id : '1',
title : "This is a test document1",
list : ["value1" , "value2"],
...
}
document 2
{
id : '2',
title : "This is a test document2",
valueList : ["value1" , "value2"],
...
}
I need to add some more elements to the valueList in these documents, given a list of document ids, using the bulk API. The result should look like:
document 1
{
id : '1',
title : "This is a test document1",
list : ["value1" , "value2", "value3"],
...
}
document 2
{
id : '2',
title : "This is a test document2",
valueList : ["value1" , "value2" , "value3"],
...
}
What can I do to achieve this?
I tried using scripts, but that only updates a single document.
Sorry, I am really new to Elasticsearch; I might even be asking a silly question. Please bear with me and help me get this clear.
See Updating Document. It should be straightforward. You need to use _update, and just to give you an idea (even though the documentation is nearly complete), it could look like this:
POST /your_index/your_type/document1/_update
{
  "doc" : {
    "title" : "This is a test document1",
    "list" : ["value1", "value2", "value3"]
  }
}
This will update document1.
In case of bulk updates you should read Batch Processing and have a look at the Bulk API.
From the docs:
POST /your_index/your_type/_bulk
{ "update" : {"_id" : "document1", "_type" : "your_type", "_index" : "your_index"}}
{ "doc" : {"myfield" : "newvalue"} }
{ "update" : {"_id" : "document2", "_type" : "your_type", "_index" : "your_index"}}
{ "doc" : {"myfield" : "newvalue"} }
Please note that you can just use _update for Partial Updates.
The simplest form of the update request accepts a partial document as
the doc parameter, which just gets merged with the existing document.
Objects are merged together, existing scalar fields are overwritten,
and new fields are added.
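One caveat worth adding as a side note: merging a partial doc overwrites array fields rather than appending to them, so the doc approach above would replace the existing list wholesale. To append, a scripted bulk update is one option; below is a sketch using the question's field names (on Elasticsearch versions before 5.6 the script key is "inline" rather than "source"):

POST /your_index/your_type/_bulk
{ "update" : { "_id" : "document1" } }
{ "script" : { "source" : "ctx._source.list.add(params.v)", "params" : { "v" : "value3" } } }
{ "update" : { "_id" : "document2" } }
{ "script" : { "source" : "ctx._source.valueList.add(params.v)", "params" : { "v" : "value3" } } }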

Bulk indexing using elastic search

Until now I was indexing data into Elasticsearch document by document, and as the data started increasing this has become very slow and is not an optimized approach. So I searched for a bulk-insert mechanism and found the Elastic Bulk API. The documentation on the official site confused me. The approach I am using is to pass the data as a WebRequest and execute it on the Elastic server. To create a batch/bulk insert request, the API wants us to form a template like
localhost:9200/_bulk as URL and
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
to index a document with id 1 and field1 set to value1. The API also suggests sending the data as unprettified JSON, so that each action and document stays on its own line. So, to pass multiple documents with multiple properties, how should I structure my data?
I tried the following in the Firefox RestClient, with POST and a JSON header, but RestClient throws an error, and I know it is not valid JSON:
{ "index" : { "_index" : "indexName", "_type" : "type1", "_id" : "111" },
{ "Name" : "CHRIS","Age" : "23" },"Gender" : "M"}
Your data is not well-formed:
You don't need the comma after the first line
You're missing a closing } on the first line
You have a closing } in the middle of your second line; you need to remove it as well.
The correct way of formatting your data for a bulk insert looks like this:
curl -XPOST localhost:9200/_bulk -H 'Content-Type: application/x-ndjson' -d '
{ "index" : { "_index" : "indexName", "_type" : "type1", "_id" : "111" }}
{ "Name" : "CHRIS", "Age" : "23", "Gender" : "M" }
'
This will work.
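Since the question asks how to pass multiple documents, the same format simply repeats the action/document line pairs, one pair per document; the second document below is purely illustrative:

curl -XPOST localhost:9200/_bulk -H 'Content-Type: application/x-ndjson' -d '
{ "index" : { "_index" : "indexName", "_type" : "type1", "_id" : "111" }}
{ "Name" : "CHRIS", "Age" : "23", "Gender" : "M" }
{ "index" : { "_index" : "indexName", "_type" : "type1", "_id" : "112" }}
{ "Name" : "ALEX", "Age" : "31", "Gender" : "F" }
'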
UPDATE
When using Postman on Chrome, make sure to add a new line after line 2, that is, after the last line of the request body.
Using Elasticsearch 7.9.2.
When sending the bulk update, I was getting a missing-new-line error.
This is weird, but after adding a new line at the end of all of the operations, it works fine with Postman.
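To make the newline requirement concrete, here is a sketch of a request body for 7.x, where _type is no longer needed in the action line; the index name is illustrative:

POST localhost:9200/_bulk
Content-Type: application/x-ndjson

{ "update" : { "_index" : "test", "_id" : "1" } }
{ "doc" : { "lastUsedTime" : "1519221900208" } }

The blank line after the last doc line stands for the required terminating newline.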

How to perform Sum on a Map Key in the Mongo DB document within Spring

My MongoDB document looks something like the following:
{
"_class" : "com.foo.foo.FooClass",
"_id" : ObjectId("5441948f3004e65fbda72d9c"),
"actionType" : "LOGIN",
"actor" : "bolt",
"extraDataMap" : {
"workHours" : NumberLong(11869)
}
}
Here extraDataMap is a HashMap stored from the Java code. I have to get all the documents where "actionType" is "LOGIN", group on "actor", and sum all the "workHours" for each individual actor.
If I run the query below on MongoDB directly, it works:
db.activityLog.aggregate([
{$match : { actionType : "LOGIN" }},
{$group : { "_id" : "$actor", "hours" : { "$sum" : "$extraDataMap.workHours" } } },
{$sort : {_id : 1}}
]);
But if I run the query from Java code:
TypedAggregation<ActivityLog> agg = Aggregation.newAggregation(ActivityLog.class,
buildCriteria(),
group("actor").sum("extraDataMap.workHours").as("hours"),
sort(Sort.Direction.ASC, MongoActivityLogRepository.DOCUMENT_ID_FIELD_NAME)
);
AggregationResults<ActivityLog> result = mongoOperations.aggregate(agg, ActivityLog.class);
List<ActivityLog> results = result.getMappedResults();
It gives the error below:
Caused by: org.springframework.data.mapping.PropertyReferenceException: No property work found for type java.lang.String
at org.springframework.data.mapping.PropertyPath.<init>(PropertyPath.java:75)
at org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:327)
at org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:353)
at org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:307)
at org.springframework.data.mapping.PropertyPath.create(PropertyPath.java:290)
at org.springframework.data.mapping.PropertyPath.from(PropertyPath.java:274)
at org.springframework.data.mapping.PropertyPath.from(PropertyPath.java:245)
at org.springframework.data.mongodb.core.aggregation.TypeBasedAggregationOperationContext.getReference(TypeBasedAggregationOperationContext.java:91)
at org.springframework.data.mongodb.core.aggregation.GroupOperation$Operation.getValue(GroupOperation.java:359)
at org.springframework.data.mongodb.core.aggregation.GroupOperation$Operation.toDBObject(GroupOperation.java:355)
at org.springframework.data.mongodb.core.aggregation.GroupOperation.toDBObject(GroupOperation.java:300)
at org.springframework.data.mongodb.core.aggregation.Aggregation.toDbObject(Aggregation.java:228)
at org.springframework.data.mongodb.core.MongoTemplate.aggregate(MongoTemplate.java:1287)
at org.springframework.data.mongodb.core.MongoTemplate.aggregate(MongoTemplate.java:1264)
at org.springframework.data.mongodb.core.MongoTemplate.aggregate(MongoTemplate.java:1253)
Really appreciate all the prompt responses :)
I had the same problem as you, and I found this solution:
Instead of using TypedAggregation, use a plain Aggregation. This way, Spring Data won't perform type checking.
It would be as follows:
Aggregation agg = Aggregation.newAggregation(
buildCriteria(),
group("actor").sum("extraDataMap.workHours").as("hours"),
sort(Sort.Direction.ASC, MongoActivityLogRepository.DOCUMENT_ID_FIELD_NAME)
);
List<ActivityLog> results = mongoOperations.aggregate(agg, mongoOperations.getCollectionName(ActivityLog.class), ActivityLog.class).getMappedResults();
Note that I used a different mongoOperations.aggregate signature: since we are not using a TypedAggregation, we have to indicate which collection we are performing the aggregation on.
I hope this helps you.
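One related pitfall, offered as a hedged aside: the group stage produces documents of the shape { _id, hours }, which may not map cleanly back onto ActivityLog. A small result type can make the mapping explicit; the class below is illustrative and not part of the original answer:

// Illustrative result type matching the { _id, hours } documents
// produced by the group stage; Spring Data maps _id onto "id".
public class ActorWorkHours {
    private String id;   // the actor name used as the group key
    private long hours;  // the summed workHours

    public String getId() { return id; }
    public long getHours() { return hours; }
}

List<ActorWorkHours> hoursPerActor = mongoOperations
        .aggregate(agg, mongoOperations.getCollectionName(ActivityLog.class), ActorWorkHours.class)
        .getMappedResults();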

Check if value exists in mongodb

I'm trying to write a method that will check if a user_id is in the DB and then operate on the user_id. The portion of code that I'm stuck on is this:
DBObject query = new BasicDBObject(user_id,new BasicDBObject("$regex", true));
DBCursor result = dBcollection.find(query);
if (result.equals("true")) {
System.out.println("found");
//do stuff
}
else{
//do other stuff
}
My database is set up this way:
{ "_id" : { "$oid" : "53b4443ad121894f16ea3699"} , "user_id" : "1683777896" , "countries" : { "JA" : 1}}
{ "_id" : { "$oid" : "53b4443ad121894f16ea369a"} , "user_id" : "453121657" , "countries" : { "TU" : 1}}
I want to be able to query on the user_id and then operate on the record associated with that user_id, but I can't figure out the correct syntax in Java for the "if" statement.
Use this to query for the document with that user id:
DBObject query = new BasicDBObject("user_id", user_id);
Use this for the if statement to determine if the find actually found that user id:
if (result.hasNext())
If that doesn't go into the if statement, show us the code that gets the database and dBcollection, and make sure you are connecting to the right database and collection name.
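Putting the two corrections together, a minimal sketch of the fixed method, using the same legacy DBObject API as the question (only the query and the if condition change):

// Match on the value of the user_id field instead of using it as a key
DBObject query = new BasicDBObject("user_id", user_id);
DBCursor result = dBcollection.find(query);
if (result.hasNext()) {
    System.out.println("found");
    DBObject record = result.next();
    // do stuff with the matched record
} else {
    // do other stuff
}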
