Spring Boot MongoDB: Query date as ISODate instead of $date long

I'm building my query like this:
Date date = new Date();
Criteria criteria = Criteria
.where("metadata.value.digitalitzacio.dataDigitalitzacio")
.is(new Date(2018,10,10));
this.mongoTemplate.find(Query.query(criteria));
It builds this query:
Query: { "metadata.value.digitalitzacio.dataDigitalitzacio" : { "$date" : 61499948400000 } }
So it fails: it sends the query with a $date long instead of an ISODate. I mean, metadata.value.digitalitzacio.dataDigitalitzacio is stored as an ISODate in the collection:
{
"_id" : "cpd4-175ec7f0-d70f-4b63-a709-69918d98c4f2",
"metadata" : [
{
"user" : "RDOCFO",
"value" : {
"digitalitzacio" : {
"csvDigitalitzacio" : "eeeeeeeeee",
"dataDigitalitzacio" : ISODate("2018-10-10T00:00:00Z"),
"empleatDigitalitzacio" : "empleat-digitalitzacio"
}
}
}
]
}
But it's queried as a $date long. How would I solve that?

Based on https://stackoverflow.com/a/30294522/9731186, the following code should work, though I haven't tested it. Note that the java.util.Date(int, int, int) constructor is now deprecated.
String string_date = "10-10-2018";
SimpleDateFormat f = new SimpleDateFormat("dd-MM-yyyy");
Date d = new Date();
try {
    d = f.parse(string_date);
} catch (ParseException ex) {
    ex.printStackTrace();
}
Criteria criteria = Criteria
    .where("metadata.value.digitalitzacio.dataDigitalitzacio")
    .is(d);
this.mongoTemplate.find(Query.query(criteria));
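For what it's worth, the underlying problem is the deprecated constructor itself, not the $date long: java.util.Date(int, int, int) treats the year as an offset from 1900 and the month as zero-based, so new Date(2018, 10, 10) actually represents 10 November of the year 3918. The $date long in the logged query is just how Spring Data prints a BSON date; it is the value that is wrong. You can check the logged value in the mongo shell (printed in UTC; the exact instant depends on the JVM's timezone):
> new Date(61499948400000)
ISODate("3918-11-09T23:00:00Z")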

Related

How to get field values from a query using NEST with Elastic Search

I'm new to Elasticsearch and am using (trying to use) the NEST library. I'm writing logs to an index using the Serilog Elasticsearch sink. So the first consideration is that I have no control over the structure the sink uses, just the structured logging properties that I choose to log.
Anyway, I'm simply trying to run a basic search where I want to return the first X documents from an index. I'm able to get some of the property values back from the query but nothing for any of the fields.
The query is as follows:
var searchResponse = await _elasticClient.SearchAsync<LogsViewModel>(s => s
.Index("webapp-razor-*")
.From(0)
.Size(5)
.Query(q => q.MatchAll()));
I'm guessing the reason I'm returning null for the fields is because the model class is not structured correctly.
Running the console tool within the Elasticsearch portal for a simple GET request, an example document returned from this query is below:
{
"_index" : "webapp-razor-2021.05",
"_type" : "_doc",
"_id" : "34v3t43kBwE34t3vJowGRgl",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2021-05-03T20:19:46.9329848+01:00",
"level" : "Information",
"messageTemplate" : "{#LogEventCategory}{#LogEventType}{#LogEventSource}{#LogCountry}{#LogRegion}{#LogCity}{#LogZip}{#LogLatitude}{#LogLongitude}{#LogIsp}{#LogIpAddress}{#LogMobile}{#LogUserId}{#LogUsername}{#LogForename}{#LogSurname}{#LogData}",
"message" : "\"Open Id Connect\"\"User Sign In\"\"WebApp-RAZOR\"\"United Kingdom\"\"England\"\"MyTown\"\"PX27\"\"54.8951\"\"-9.1585\"\"My ISP\"\"123.345.789.180\"\"False\"\"a8vce3vc-8e61-44fc-b142-93ck396ad91ce\"\"joe#email.net\"\"joe#email.net\"\"Bloggs\"\"User with username [joe#email.net] forename [joe#email.net] surname [Bloggs] from IP Address [123.345.789.180] signed into the application [WebApp_RAZOR] Succesfully\"",
"fields" : {
"LogEventCategory" : "Open Id Connect",
"LogEventType" : "User Sign In",
"LogEventSource" : "WebApp-RAZOR",
"LogCountry" : "United Kingdom",
"LogRegion" : "England",
"LogCity" : "MyTown",
"LogZip" : "PX27",
"LogLatitude" : "54.8951",
"LogLongitude" : "-9.1585",
"LogIsp" : "My ISP",
"LogIpAddress" : "123.345.789.180",
"LogMobile" : "False",
"LogUserId" : "a8vce3vc-8e61-44fc-b142-93ck396ad91ce",
"LogUsername" : "joe#email.net",
"LogForename" : "joe#email.net",
"LogSurname" : "Bloggs",
"LogData" : "User with username [joe#email.net] forename [Joe] surname [Bloggs] from IP Address [123.345.789.180] signed into the application [WebApp_RAZOR] Succesfully",
"RequestId" : "0HM8ED1IRB7AK:00000001",
"RequestPath" : "/signin-oidc",
"ConnectionId" : "0HM8ED1IRB7AK",
"MachineName" : "DESKTOP-OS52032",
"MemoryUsage" : 23688592,
"ProcessId" : 26212,
"ProcessName" : "WebApp-RAZOR",
"ThreadId" : 6
}
}
}
Sample model class (or part of it)
public class LogsViewModel
{
[JsonProperty("#timestamp")]
public string Timestamp { get; set; }
[JsonProperty("level")]
public string Level { get; set; }
[JsonProperty("fields")]
public Fields Fields { get; set; }
}
public class Fields
{
[JsonProperty("LogEventCategory")]
public string LogEventCategory { get; set; }
// Not all properties shown here, but they would follow the same principle...
}
Could someone please give me an idea of how to go about this? Once I know how to get the values from fields such as "LogEventCategory", I should be able to move forward and figure out the rest. None of the documentation examples for Elastic have worked for me, thanks
After a few days of trial and error, I finally arrived at a solution for pulling the fields of choice out of the _source object of the Elastic document. There may well be a more optimized approach here, so I welcome any feedback on the topic.
My first step was to view the structure of a sample document from an index that Serilog is writing to. Note that in my case I'm not necessarily including all structured log event properties in every log event written to Elastic; e.g. on system startup, I simply don't need details of the user/location etc.
Using the DevTools in the Elastic Portal, I performed a simple GET request:
Great tip from user Russ Cam in the comments above, who advises using the Elastic Common Schema .NET NuGet package, which provides some standardization for using Serilog and logging to Elastic from various different apps/sources. Reading the forums, it looks like Elastic is strongly encouraging use of a common schema, as it will play better when creating charts/metrics/dashboards etc.
My web app uses .NET 5. I've included the code section from my Program.cs file below, which shows where I added the reference to the Elastic Common Schema .NET library. Because I'm connecting to Elastic Cloud, I have to include the authentication details when building the Elastic client, and it took me a few attempts before I figured out how to incorporate this package reference alongside some of the other Elastic client options:
Program.cs file:
public static void Main(string[] args)
{
var configuration = new ConfigurationBuilder()
.SetBasePath(Directory.GetCurrentDirectory())
.AddJsonFile(path: "appsettings.json", optional: false, reloadOnChange: true)
.Build();
// Credentials used for the elastic cloud logging sink.
var elkUri = configuration.GetSection("ElasticCloud").GetValue<string>("Uri");
var elkUsername = configuration.GetSection("ElasticCloud").GetValue<string>("Username");
var elkPassword = configuration.GetSection("ElasticCloud").GetValue<string>("Password");
var elkApplicationName = configuration.GetSection("ElasticCloud").GetValue<string>("ApplicationName");
Log.Logger = new LoggerConfiguration()
.ReadFrom.Configuration(configuration)
.WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(elkUri))
{
ModifyConnectionSettings = x => x.BasicAuthentication(elkUsername, elkPassword),
IndexFormat = "webapp-razor-{0:yyyy.MM}",
AutoRegisterTemplate = true,
CustomFormatter = new EcsTextFormatter() // *Elastic Common Schema .NET package ref HERE*
})
.CreateLogger();
var host = CreateHostBuilder(args).Build();
using var scope = host.Services.CreateScope();
var services = scope.ServiceProvider;
string logEventCategory = "WebApp-RAZOR";
string logEventType = "Application Startup";
string logEventSource = "System";
string logData = "";
try
{
// Tested OK 1.5.2021
//throw new Exception(); // Testing only..
logData = "Application Starting Up";
Log.Information(
"{#LogEventCategory}" +
"{#LogEventType}" +
"{#LogEventSource}" +
"{#LogData}",
logEventCategory,
logEventType,
logEventSource,
logData);
host.Run(); // Run the WebHostBuilder.
}
catch (Exception ex)
{
logData = "The Application failed to start correctly.";
// Tested on 08/07/2020
Log.Fatal(ex,
"{#LogEventCategory}" +
"{#LogEventType}" +
"{#LogEventSource}" +
"{#LogData}",
logEventCategory,
logEventType,
logEventSource,
logData);
}
finally // Cleanup code.
{
Log.CloseAndFlush();
}
}
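For completeness, the configuration read above assumes an "ElasticCloud" section in appsettings.json shaped roughly like this (the values here are placeholders of mine, not from the original post):
{
  "ElasticCloud": {
    "Uri": "https://my-deployment.es.europe-west1.gcp.cloud.es.io:9243",
    "Username": "elastic",
    "Password": "<password>",
    "ApplicationName": "WebApp-RAZOR"
  }
}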
My reason for using a dynamic type in the NEST client method is to avoid a strongly typed model; this made life much easier when trying to figure out the structure of the data returned from the query, by pausing on a debug breakpoint and having a peek inside the content.
var searchResponse = await _elasticClient.SearchAsync<dynamic>(s => s
//.AllIndices()
.Index("webapp-razor-*")
.Query(q => q
.MatchAll()
)
);
// Once the searchResponse data is returned from the query,
// I then map the results to a View Model
// (which I use for rendering the list of results to my Razor page)
LogsViewModel = new LogsViewModel
{
ScannedEventCount = searchResponse.Hits.Count,
LogEventProperties = new List<LogEventProperties>()
};
foreach (var doc in searchResponse.Documents)
{
var lep = new LogEventProperties();
lep.Timestamp = DateTime.Parse(doc["@timestamp"].ToString());
lep.Level = doc["log.level"];
// Properties
if (((IDictionary<string, object>)doc).ContainsKey("_metadata"))
{
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_event_category", out object value1)) { lep.LogEventCategory = value1.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_event_type", out object value2)) { lep.LogEventType = value2.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_event_source", out object value3)) { lep.LogEventSource = value3.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_device_id", out object value4)) { lep.LogDeviceId = value4.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_country", out object value5)) { lep.LogCountry = value5.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_region", out object value6)) { lep.LogRegion = value6.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_city", out object value7)) { lep.LogCity = value5.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_zip", out object value8)) { lep.LogZip = value5.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_latitude", out object value9)) { lep.LogLatitude = value9.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_longitude", out object value10)) { lep.LogLongitude = value10.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_isp", out object value11)) { lep.LogIsp = value5.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_ip_address", out object value12)) { lep.LogIpAddress = value12.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_mobile", out object value13)) { lep.LogMobile = value13.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_user_id", out object value14)) { lep.LogUserId = value14.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_username", out object value15)) { lep.LogUsername = value15.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_forename", out object value16)) { lep.LogForename = value16.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_surname", out object value17)) { lep.LogSurname = value17.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("log_data", out object value18)) { lep.LogData = value18.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("request_id", out object value19)) { lep.RequestId = value19.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("request_path", out object value20)) { lep.RequestPath = value20.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("connection_id", out object value21)) { lep.ConnectionId = value21.ToString(); }
if (((IDictionary<String, object>)doc["_metadata"]).TryGetValue("memory_usage", out object value22)) { lep.MemoryUsage = (Int64)value22; }
}
// Exception
if (((IDictionary<string, object>)doc).ContainsKey("error"))
{
if (((IDictionary<String, object>)doc["error"]).TryGetValue("message", out object value23)) { lep.ErrorMessage = value23.ToString(); }
if (((IDictionary<String, object>)doc["error"]).TryGetValue("type", out object value24)) { lep.ErrorType = value24.ToString(); }
if (((IDictionary<String, object>)doc["error"]).TryGetValue("stack_trace", out object value25)) { lep.ErrorStackTrace = value25.ToString(); }
}
// Machine Name
if (((IDictionary<string, object>)doc).ContainsKey("host"))
{
if (((IDictionary<String, object>)doc["host"]).TryGetValue("name", out object value26)) { lep.MachineName = value26.ToString(); }
}
// Process
if (((IDictionary<string, object>)doc).ContainsKey("process"))
{
if (((IDictionary<String, object>)doc["process"]["thread"]).TryGetValue("id", out object value27)) { lep.ThreadId = (Int64)value27; }
if (((IDictionary<String, object>)doc["process"]).TryGetValue("pid", out object value28)) { lep.ProcessId = (Int64)value28; }
if (((IDictionary<String, object>)doc["process"]).TryGetValue("name", out object value29)) { lep.ProcessName = value29.ToString(); }
}
LogsViewModel.LogEventProperties.Add(lep);
}
return View(LogsViewModel);
}
The fundamental reason I went with the above method is that some of the documents will not contain all of the structured logging event properties. I had to derive a way of checking for the existence of the dictionary keys before trying to access the values; otherwise I'd get exceptions when the keys are missing. An example of this is the difference between a log event generated during an exception versus an information event for when a user logged into the app.
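Since all those repeated casts and TryGetValue calls are easy to get wrong (a copied valueN slips in easily), a small helper method can reduce the duplication. A minimal sketch under the same assumptions (the GetString name is mine, not from the original code):
// Hypothetical helper: safely read a string value from one of the
// dictionary-shaped sections of the dynamic document.
private static string GetString(object section, string key)
{
    return ((IDictionary<string, object>)section).TryGetValue(key, out var value)
        ? value?.ToString()
        : null;
}

// Usage inside the foreach loop:
// lep.LogEventCategory = GetString(doc["_metadata"], "log_event_category");
// lep.LogCity = GetString(doc["_metadata"], "log_city");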
The two documents below show a slightly different JSON structure, which emphasises my decision to fetch the results using a dynamic type. In general, for any documents that I create myself in Elastic, I would usually map the items to a proper model, given I would always know the full structure beforehand.
{
"took" : 0,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 70,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "webapp-razor-2021.05",
"_type" : "_doc",
"_id" : "_2sOPnkBwE4YgJownxnP",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2021-05-05T20:43:34.6041763+01:00",
"log.level" : "Information",
"message" : "\"WebApp-RAZOR\"\"Application Startup\"\"System\"\"Application Starting Up\"",
"_metadata" : {
"message_template" : "{#LogEventCategory}{#LogEventType}{#LogEventSource}{#LogData}",
"log_event_category" : "WebApp-RAZOR",
"log_event_type" : "Application Startup",
"log_event_source" : "System",
"log_data" : "Application Starting Up",
"memory_usage" : 4680920
},
"ecs" : {
"version" : "1.5.0"
},
"event" : {
"severity" : 2,
"timezone" : "GMT Standard Time",
"created" : "2021-05-05T20:43:34.6041763+01:00"
},
"host" : {
"name" : "DESKTOP-OS52032"
},
"log" : {
"logger" : "Elastic.CommonSchema.Serilog",
"original" : null
},
"process" : {
"thread" : {
"id" : 9
},
"pid" : 3868,
"name" : "WebApp-RAZOR",
"executable" : "WebApp-RAZOR"
}
}
},
{
"_index" : "webapp-razor-2021.05",
"_type" : "_doc",
"_id" : "AGsOPnkBwE4YgJowyBrP",
"_score" : 1.0,
"_source" : {
"#timestamp" : "2021-05-05T20:43:44.3936344+01:00",
"log.level" : "Information",
"message" : "\"Open Id Connect\"\"User Sign In\"\"WebApp-RAZOR\"\"United Kingdom\"\"England\"\"MyTown\"\"OX26\"\"51.8951\"\"-1.1585\"\"My ISP\"\"123.456.789.101\"\"False\"\"34vc34-34v34534-44fc-b142-32223ad91ce\"\"joe.bloggs#email.net\"\"joe.bloggs#email.net\"\"Bloggs\"\"User with username [joe.bloggs#email.net] forename [Jose] surname [Bloggs] from IP Address [123.345.789.101] signed into the application [WebApp_RAZOR] Succesfully\"",
"_metadata" : {
"message_template" : "{#LogEventCategory}{#LogEventType}{#LogEventSource}{#LogCountry}{#LogRegion}{#LogCity}{#LogZip}{#LogLatitude}{#LogLongitude}{#LogIsp}{#LogIpAddress}{#LogMobile}{#LogUserId}{#LogUsername}{#LogForename}{#LogSurname}{#LogData}",
"log_event_category" : "Open Id Connect",
"log_event_type" : "User Sign In",
"log_event_source" : "WebApp-RAZOR",
"log_country" : "United Kingdom",
"log_region" : "England",
"log_city" : "MyTown",
"log_zip" : "OX26",
"log_latitude" : "55.1234",
"log_longitude" : "-10.1585",
"log_isp" : "My ISP",
"log_ip_address" : "123.456.789.101",
"log_mobile" : "False",
"log_user_id" : "34vc34-34v3434-44fc-b142-32223ad91ce",
"log_username" : "joe.bloggs#email.net",
"log_forename" : "joe.bloggs#email.net",
"log_surname" : "Bloggs",
"log_data" : "User with username [joe.bloggs#email.net] forename [Joe] surname [Bloggs] from IP Address [123.456.789.101] signed into the application [WebApp_RAZOR] Succesfully",
"request_id" : "0HM8FVO9FFHDD:00000001",
"request_path" : "/signin-oidc",
"connection_id" : "0HM8FVO9FFHDD",
"memory_usage" : 23954480
},
"ecs" : {
"version" : "1.5.0"
},
"event" : {
"severity" : 2,
"timezone" : "GMT Standard Time",
"created" : "2021-05-05T20:43:44.3936344+01:00"
},
"host" : {
"name" : "DESKTOP-OS52032"
},
"log" : {
"logger" : "Elastic.CommonSchema.Serilog",
"original" : null
},
"process" : {
"thread" : {
"id" : 16
},
"pid" : 3868,
"name" : "WebApp-RAZOR",
"executable" : "WebApp-RAZOR"
}
}
},
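One last observation of my own (an assumption, not from the original post): if you later want to return to a strongly typed model, be aware that NEST 7's default source serializer does not honor Newtonsoft's [JsonProperty] attributes, which may be why the original model came back with nulls. NEST's own PropertyName attribute should work, roughly like this:
using Nest;

// A minimal sketch, assuming NEST 7.x with its built-in source serializer,
// which honors [PropertyName] but ignores Newtonsoft's [JsonProperty].
public class LogsViewModel
{
    [PropertyName("@timestamp")]
    public string Timestamp { get; set; }

    [PropertyName("log.level")]
    public string Level { get; set; }
}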

Aggregating sequence of connected events

Let's say I have events like this in my log:
{"type": "approval_revokation", "approval_id": 22}
{"type": "approval", "request_id": 12, "approval_id": 22}
{"type": "control3", "request_id": 12}
{"type": "control2", "request_id": 12}
{"type": "control1", "request_id": 12}
{"type": "request", "request_id": 12, "requesting_user": "user1"}
{"type": "registration", "userid": "user1"}
I would like to do a search that aggregates one bucket for each approval_id, containing all events connected to it as above. As you can see, there is no single id field that can be used throughout the events, but they are all connected in a chain.
The reason I would like this is to feed it into an anomaly detector, to verify things like that all controls were executed, and to validate the registration event for an eventual approval.
Can this be done using aggregations, or is there any other suggestion?
If there's no single unique "glue" parameter to tie these events together, I'm afraid the only choice is a brute-force map-reduce iterator on all the docs in the index.
After ingesting the above events:
POST _bulk
{"index":{"_index":"events","_type":"_doc"}}
{"type":"approval_revokation","approval_id":22}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"approval","request_id":12,"approval_id":22}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"control3","request_id":12}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"control2","request_id":12}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"control1","request_id":12}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"request","request_id":12,"requesting_user":"user1"}
{"index":{"_index":"events","_type":"_doc"}}
{"type":"registration","userid":"user1"}
we can link them together like so:
POST events/_search
{
"size": 0,
"aggs": {
"log_groups": {
"scripted_metric": {
"init_script": "state.groups = [];",
"map_script": """
int fetchIndex(List groups, def key, def value, def backup_key) {
if (key == null || value == null) {
// nothing to search
return -1
}
return IntStream.range(0, groups.size())
.filter(i -> groups.get(i)['docs']
.stream()
.anyMatch(_doc -> _doc.get(key) == value
|| (backup_key != null
&& _doc.get(backup_key) == value)))
.findFirst()
.orElse(-1);
}
def approval_id = doc['approval_id'].size() != 0
? doc['approval_id'].value
: null;
def request_id = doc['request_id'].size() != 0
? doc['request_id'].value
: null;
def requesting_user = doc['requesting_user.keyword'].size() != 0
? doc['requesting_user.keyword'].value
: null;
def userid = doc['userid.keyword'].size() != 0
? doc['userid.keyword'].value
: null;
HashMap valueMap = ['approval_id':approval_id,
'request_id':request_id,
'requesting_user':requesting_user,
'userid':userid];
def found = false;
for (def entry : valueMap.entrySet()) {
def field = entry.getKey();
def value = entry.getValue();
def backup_key = field == 'userid'
? 'requesting_user'
: field == 'requesting_user'
? 'userid'
: null;
def found_index = fetchIndex(state.groups, field, value, backup_key);
if (found_index != -1) {
state.groups[found_index]['docs'].add(params._source);
if (approval_id != null) {
state.groups[found_index]['approval_id'] = approval_id;
}
found = true;
break;
}
}
if (!found) {
HashMap nextInLine = ['docs': [params._source]];
if (approval_id != null) {
nextInLine['approval_id'] = approval_id;
}
state.groups.add(nextInLine);
}
""",
"combine_script": "return state",
"reduce_script": "return states"
}
}
}
}
returning the grouped events + the inferred approval_id:
"aggregations" : {
"log_groups" : {
"value" : [
{
"groups" : [
{
"docs" : [
{...}, {...}, {...}, {...}, {...}, {...}, {...}
],
"approval_id" : 22
},
{ ... }
]
}
]
}
}
Keep in mind that such scripts are going to be quite slow, especially when run on large numbers of events. Note also that the reduce_script above simply returns the per-shard states, so on an index with more than one shard the groups would still need to be merged on the client side.

Group By (Aggregation) to get only the latest field values

I wrote a query to read the Metricbeat index. This gives me what I want, but it repeats the values multiple times.
I want to group this by the latest timestamp so I get only the latest record.
Below is my query
string indexName = "metricbeat-7.4.2-" + DateTime.Now.Year.ToString() + "." + DateTime.Now.Month.ToString("00") + "." + DateTime.Now.Day.ToString("00");
connectionSettings = new ConnectionSettings(connectionPool).DefaultIndex(indexName);
elasticClient = new ElasticClient(connectionSettings);
string[] systemFields = new string[]
{
"system.memory.actual.used.pct",
"system.cpu.total.norm.pct"
};
var elasticResponse = elasticClient.Search<object>(s => s
.DocValueFields(dvf => dvf.Fields(systemFields))
);
DSL query
GET /metricbeat*/_search?pretty=true
{
"query" : {
"match_all": {}
},
"docvalue_fields" : [
"system.memory.actual.used.pct",
"system.cpu.total.norm.pct",
"system.load.5",
"docker.diskio.summary.bytes"
]
}
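One common way to keep only the latest record per group is a terms aggregation with a top_hits sub-aggregation sorted by timestamp descending. A sketch of the DSL (my own, assuming the standard Metricbeat @timestamp and host.name fields):
GET /metricbeat*/_search
{
  "size": 0,
  "aggs": {
    "per_host": {
      "terms": { "field": "host.name" },
      "aggs": {
        "latest": {
          "top_hits": {
            "size": 1,
            "sort": [ { "@timestamp": { "order": "desc" } } ],
            "docvalue_fields": [
              "system.memory.actual.used.pct",
              "system.cpu.total.norm.pct"
            ]
          }
        }
      }
    }
  }
}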

Why does Spring Data fail on date queries?

I have records in my mongodb which are like this example record.
{
"_id" : ObjectId("5de6e329bf96cb3f8d253163"),
"changedOn" : ISODate("2019-12-03T22:35:21.126Z"),
"bappid" : "BAPP0131337",
}
I have code which is implemented as:
public List<ChangeEvent> fetchChangeList(Application app, Date from, Date to) {
Criteria criteria = null;
criteria = Criteria.where("bappid").is(app.getBappid());
Query query = Query.query(criteria);
if(from != null && to == null) {
criteria = Criteria.where("changedOn").gte(from);
query.addCriteria(criteria);
}
else if(to != null && from == null) {
criteria = Criteria.where("changedOn").lte(to);
query.addCriteria(criteria);
} else if(from != null && to != null) {
criteria = Criteria.where("changedOn").gte(from).lte(to);
query.addCriteria(criteria);
}
logger.info("Find change list query: {}", query.toString());
List<ChangeEvent> result = mongoOps.find(query, ChangeEvent.class);
return result;
}
This code always comes up empty. The logging statement generates a log entry like:
Find change list query: Query: { "bappid" : "BAPP0131337", "changedOn" : { "$gte" : { "$date" : 1575418473670 } } }, Fields: { }, Sort: { }
Playing around with variants of the query above, in a database which has the record shown earlier, we get the following results.
Returns records:
db["change-events"].find({ "bappid" : "BAPP0131337" }).pretty();
Returns empty set:
db["change-events"].find({ "bappid" : "BAPP0131337", "changedOn" : { "$gte" : { "$date" : 1575418473670 } } }).pretty();
Returns empty set:
db["change-events"].find({ "bappid" : "BAPP0131337", "changedOn" : { "$lte" : { "$date" : 1575418473670 } } }).pretty();
The record matched without the date criteria, so at least one of the two queries above ($gte or $lte cover all possibilities) should be non-empty. But both come up empty.
What is wrong here?
Since the collection name change-events is different from the class name ChangeEvent, you have to pass the collection name in the find query of mongoOps, as below (alternatively, you could annotate ChangeEvent with @Document(collection = "change-events")):
List<ChangeEvent> result = mongoOps.find(query, ChangeEvent.class, "change-events");
I tried replicating this and found that your query without dates in the where clause does not work either, i.e.:
Criteria criteria = null;
criteria = Criteria.where("bappid").is(bappid);
Query query = Query.query(criteria);
And the find query on mongoOps as below:
List<ChangeEvent> result = mongoTemplate.find(query, ChangeEvent.class);
This will not work because the collection name is missing, while the query below, with the collection name, executes fine:
List<ChangeEvent> result1 = mongoTemplate.find(query, ChangeEvent.class, "changeEvents");
For a detailed explanation of the above discussion, see my GitHub repo: https://github.com/krishnaiitd/learningJava/blob/master/spring-boot-sample-data-mongodb/src/main/java/sample/data/mongo/main/Application.java#L157
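As a side note of mine (not from the original answer): the shell experiments in the question come up empty for another reason as well. The legacy mongo shell does not interpret Extended JSON such as { "$date" : 1575418473670 } as a BSON date; it treats it as a plain subdocument, and the comparison operators do not match across BSON types, so a date field never matches it. In the shell, the equivalent query would be written with ISODate():
db["change-events"].find({
    "bappid" : "BAPP0131337",
    "changedOn" : { "$gte" : ISODate("2019-12-03T00:00:00Z") }
}).pretty();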

MongoDB dynamic update of collection when changes occurs in another collection

I created two collections using Robomongo:
collection_Project that contains documents like this
{
"_id" : ObjectId("5537ba643a45781cc8912d8f"),
"_Name" : "ProjectName",
"_Guid" : LUUID("16cf098a-fead-9d44-9dc9-f0bf7fb5b60f"),
"_Obj" : [
]
}
that I create with the function
public static void CreateProject(string ProjectName)
{
MongoClient client = new MongoClient("mongodb://localhost/TestCreationMongo");
var db = client.GetServer().GetDatabase("TestMongo");
var collection = db.GetCollection("collection_Project");
var project = new Project
{
_Name = ProjectName,
_Guid = Guid.NewGuid(),
_Obj = new List<c_Object>()
};
collection.Insert(project);
}
and collection_Object that contains documents like this
{
"_id" : ObjectId("5537ba6c3a45781cc8912d90"),
"AssociatedProject" : "ProjectName",
"_Guid" : LUUID("d0a5565d-a0aa-7a4a-9683-b86f1c1de188"),
"First" : 42,
"Second" : 1000
}
That I create with the function
public static void CreateObject(c_Object ToAdd)
{
MongoClient client = new MongoClient("mongodb://localhost/TestCreationMongo");
var db = client.GetServer().GetDatabase("TestMongo");
var collection = db.GetCollection("collection_Object");
collection.Insert(ToAdd);
}
I update the documents of collection_Project with the function
public static void AddObjToProject(c_Object ObjToAdd, string AssociatedProject)
{
MongoClient client = new MongoClient("mongodb://localhost/TestCreationMongo");
var db = client.GetServer().GetDatabase("TestMongo");
var collection = db.GetCollection<Project>("collection_Project");
var query = Query.EQ("_Name", AssociatedProject);
var update = Update.AddToSetWrapped<c_Object>("_Obj", ObjToAdd);
collection.Update(query, update);
}
so that the documents in collection_Project look like this
{
"_id" : ObjectId("5537ba643a45781cc8912d8f"),
"_Name" : "ProjectName",
"_Guid" : LUUID("16cf098a-fead-9d44-9dc9-f0bf7fb5b60f"),
"_Obj" : [
{
"_id" : ObjectId("5537ba6c3a45781cc8912d90"),
"AssociatedProject" : "ProjectName",
"_Guid" : LUUID("d0a5565d-a0aa-7a4a-9683-b86f1c1de188"),
"First" : 42,
"Second" : 1000
}
]
}
Can I update the document only in collection_Object and see the change in collection_Project as well?
I tried to do that
public static void UpdateObject(c_Object ToUpdate)
{
MongoClient client = new MongoClient("mongodb://localhost/TestCreationMongo");
var db = client.GetServer().GetDatabase("TestMongo");
var collection = db.GetCollection("collection_Object");
var query = Query.EQ("_Guid", ToUpdate._Guid);
var update = Update.Replace<c_Object>(ToUpdate);
collection.Update(query, update);
}
but the collection_Project doesn't change.
Do you have any clue?
It looks like you are embedding the 'Object' document inside the 'Project' document, which might be fine, but that approach eliminates the need for your separate collection_Object collection. That is to say, collection_Object is redundant because each object (not just a reference) is actually stored inside the Project document as you have implemented it.
See the documentation for information on using embedded documents.
Alternatively, you could use document references.
The best approach to use depends on your specific use case.
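To illustrate the reference-based alternative (a sketch with a hypothetical _ObjIds field, not your current schema): if collection_Project stored only the _id values of the objects, an update in collection_Object would be visible wherever the project's objects are looked up, because there is only one copy of each object:
{
    "_id" : ObjectId("5537ba643a45781cc8912d8f"),
    "_Name" : "ProjectName",
    "_Guid" : LUUID("16cf098a-fead-9d44-9dc9-f0bf7fb5b60f"),
    "_ObjIds" : [
        ObjectId("5537ba6c3a45781cc8912d90")
    ]
}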
