I have data like this:
{
  "UID": "a24asdb34-asd42ljdf-ikloewqr",
  "createdById": 1,
  "name": "name1",
  "createDate": "01.14.2019",
  "latest": 369
},
{
  "UID": "a24asdb34-asd42ljdf-ikloewqr",
  "createdById": 1,
  "name": "name2",
  "createDate": "01.14.2019",
  "latest": 395
},
{
  "UID": "a24asdb34-asd42ljdf-ikloewqr",
  "createdById": 1,
  "name": "name3",
  "createDate": "01.14.2019",
  "latest": 450
}
I need a query that selects the single document whose latest field is greater than that field in all the other documents.
Java code:
@Query(value = "[ {$sort : {latest: -1}}, {$limit : 1} ]", fields = "{ 'UID' : 1, 'name' : 1, 'createDate' : 1 }")
Page<MyObject> findByCreatedById(String userId, Pageable pageable);
db.orders.aggregate(
[
{ $sort: { latest: -1 } },
{ $limit: 1 }
]
)
Sort in descending order of the latest field and limit the result size to 1.
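The effect of the `$sort`/`$limit` stages can be sketched in plain Python; the sample documents below mirror the data in the question, and this is only an illustration of the pipeline semantics, not an actual driver call:

```python
# Plain-Python sketch of {$sort: {latest: -1}}, {$limit: 1}:
# sort descending by "latest" and keep only the first document.
docs = [
    {"name": "name1", "latest": 369},
    {"name": "name2", "latest": 395},
    {"name": "name3", "latest": 450},
]

top = sorted(docs, key=lambda d: d["latest"], reverse=True)[:1]
print(top[0]["name"])  # name3, the document with the greatest "latest"
```

Because only relative order matters, the pipeline never has to compare every pair of documents; a descending sort followed by a limit of 1 yields the maximum.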
I want to insert a nested structure into Elasticsearch.
For Example :
[
{ "Product" : "P1",
"Desc" : "productDesc",
"Items":[{
"I1": "i1",
"I_desc" : "i1_desc",
"prices" :[{
"id" : "price1",
"value" : 10
},{
"id" : "price2",
"value" : 20
}]
},
{
"I2": "i2",
"I_desc" : "i2_desc",
"prices" :[{
"id" : "price1",
"value" : 10
},{
"id" : "price",
"value" : 20
}]
}]
},
{ "Product" : "P12",
"Desc" : "product2Desc",
"Items":[{
"I1": "i1",
"I_desc" : "i1_desc",
"prices" :[{
"id" : "price11",
"value" : 12
},{
"id" : "price12",
"value" : 10
}]
},{
"I2": "i3",
"I_desc" : "i3_desc",
"prices" :[{
"id" : "price11",
"value" : 12
},{
"id" : "price31",
"value" : 33
}]
}]
}
]
I want to insert a nested structure similar to this into Elasticsearch, with index pro and id = P1 and P12 (two documents).
Then query for the data, for example:
1. Give me all product IDs which have prices with id = price11
2. All products which have item = i1
Should I use a single index keyed by id, or index all the attributes like Item, productDesc, prices, id, and value?
I have a MongoDB collection:
{
  "_id" : 123,
  "index" : "111",
  "students" : [
    {
      "firstname" : "Mark",
      "lastname" : "Smith"
    }
  ]
}
{
  "_id" : 456,
  "index" : "222",
  "students" : [
    {
      "firstname" : "Mark",
      "lastname" : "Smith"
    }
  ]
}
{
  "_id" : 789,
  "index" : "333",
  "students" : [
    {
      "firstname" : "Neil",
      "lastname" : "Smith"
    },
    {
      "firstname" : "Sofia",
      "lastname" : "Smith"
    }
  ]
}
I want to get the document whose index is in a given set, for example givenSet = ["111","333"], and whose students array has the minimum length.
The result should be the first document, with _id: 123, because its index is in givenSet and its students array length is 1, which is smaller than the third document's.
I need to write a custom JSON @Query for a Spring Mongo repository. I am new to Mongo and am a bit stuck with this problem.
I wrote something like this:
@Query("{'index':{$in : ?0}, length:{$size:$students}, $sort:{length:-1}, $limit:1}")
Department getByMinStudentsSize(Set<String> indexes);
And got the error: '$size needs a number'.
Should I just use .count() or something like that?
You should use the aggregation framework for this type of query:
1. Filter the results based on your condition.
2. Add a new field and assign the array size to it.
3. Sort on the new field.
4. Limit the result.
The solution should look something like this:
db.collection.aggregate([
  {
    "$match": {
      "index": {
        "$in": ["111", "333"]
      }
    }
  },
  {
    "$addFields": {
      "students_size": {
        "$size": "$students"
      }
    }
  },
  {
    "$sort": {
      "students_size": 1
    }
  },
  {
    "$limit": 1
  }
])
Working example: https://mongoplayground.net/p/ih4KqGg25i6
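The pipeline's effect can be sketched in plain Python against the sample documents from the question (an illustration of the stage semantics, not an actual driver call):

```python
# Plain-Python sketch of the pipeline: $match on index,
# $addFields students_size, $sort ascending, $limit 1.
docs = [
    {"_id": 123, "index": "111",
     "students": [{"firstname": "Mark", "lastname": "Smith"}]},
    {"_id": 456, "index": "222",
     "students": [{"firstname": "Mark", "lastname": "Smith"}]},
    {"_id": 789, "index": "333",
     "students": [{"firstname": "Neil", "lastname": "Smith"},
                  {"firstname": "Sofia", "lastname": "Smith"}]},
]
given_set = {"111", "333"}

# $match: keep only documents whose index is in the given set.
matched = [d for d in docs if d["index"] in given_set]
# $addFields: attach the array size as a new field.
for d in matched:
    d["students_size"] = len(d["students"])
# $sort ascending on the new field, then $limit 1.
result = sorted(matched, key=lambda d: d["students_size"])[:1]
print(result[0]["_id"])  # 123, the matching document with the fewest students
```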
You are getting the issue because the second parameter should be enclosed in curly braces; the second parameter is the projection.
@Query("{{'index':{$in : ?0}}, {length:{$size:'$students'}}, $sort:{length:1}, $limit:1}")
Department getByMinStudentsSize(Set<String> indexes);
Below is the MongoDB query:
db.collection.aggregate(
[
{
"$match" : {
"index" : {
"$in" : [
"111",
"333"
]
}
}
},
{
"$project" : {
"studentsSize" : {
"$size" : "$students"
},
"students" : 1.0
}
},
{
"$sort" : {
"studentsSize" : 1.0
}
},
{
"$limit" : 1.0
}
],
{
"allowDiskUse" : false
}
);
I have an index, for example for flights:
public class FlightIndex
{
    public int Id { get; set; }

    [Keyword]
    public string Destination { get; set; }
}
The Destination field stores data like "London Airport", "London Airport (XYZ)", and "London Airport (ABC)".
I would like to search and return the exact match on Destination.
In the query below, I want a list of flights whose destination matches the destination list provided:
q.Terms(m => m.Field(f => f.Destination).Terms(parameters.Destinations
.Select(_ => _.ToLower()).ToList()));
For example, if parameters.Destinations contains "London Airport (ABC)", nothing is returned, but if it contains "London Airport", the query returns the documents with "London Airport".
It does not seem to work with the brackets, and I'm not sure whether they need to be, or can be, escaped.
It sounds very much like Destination is not indexed as a keyword datatype; if it were, a terms query would return matches for verbatim input values. Parentheses would make no difference: the indexed value either matches exactly or it doesn't.
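The distinction can be sketched with a crude Python model of analysis (a deliberately simplified stand-in for Lucene's standard analyzer, for illustration only): a text field is broken into lowercase tokens, while a keyword field keeps the verbatim value, and a terms query compares against whatever was indexed.

```python
import re

def analyze(value):
    # Crude stand-in for the standard analyzer: lowercase word tokens,
    # punctuation such as parentheses is stripped.
    return re.findall(r"\w+", value.lower())

indexed_value = "London Airport (XYZ)"
text_terms = analyze(indexed_value)   # ['london', 'airport', 'xyz']
keyword_term = indexed_value          # stored verbatim as one term

query = "London Airport (XYZ)"
print(query in text_terms)    # False: no single token equals the phrase
print(query == keyword_term)  # True: the keyword value matches exactly
```

This is why the terms query appears to "fail on the brackets" against an analyzed field: the verbatim phrase was never indexed as a single term in the first place.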
I would check the mapping in the target index with the Get Mapping API.
Here's an example to demonstrate it working
var client = new ElasticClient(settings);
if (client.IndexExists("example").Exists)
{
client.DeleteIndex("example");
}
client.CreateIndex("example", c => c
.Mappings(m => m
.Map<FlightIndex>(mm => mm
.AutoMap()
)
)
);
client.Index(new FlightIndex { Id = 1, Destination = "London Airport (XYZ)" }, i => i
.Index("example")
.Refresh(Refresh.WaitFor)
);
client.Search<FlightIndex>(s => s
.Index("example")
.Query(q => q
.Terms(t => t
.Field(f => f.Destination)
.Terms("London Airport (XYZ)")
)
)
);
sends the following requests and receives the following responses:
HEAD http://localhost:9200/example?pretty=true
Status: 200
------------------------------
PUT http://localhost:9200/example?pretty=true
{
"mappings": {
"flightindex": {
"properties": {
"id": {
"type": "integer"
},
"destination": {
"type": "keyword"
}
}
}
}
}
Status: 200
{
"acknowledged" : true,
"shards_acknowledged" : true,
"index" : "example"
}
------------------------------
PUT http://localhost:9200/example/flightindex/1?pretty=true&refresh=wait_for
{
"id": 1,
"destination": "London Airport (XYZ)"
}
Status: 201
{
"_index" : "example",
"_type" : "flightindex",
"_id" : "1",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 0,
"_primary_term" : 1
}
------------------------------
POST http://localhost:9200/example/flightindex/_search?pretty=true&typed_keys=true
{
"query": {
"terms": {
"destination": [
"London Airport (XYZ)"
]
}
}
}
Status: 200
{
"took" : 13,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 1,
"max_score" : 1.0,
"hits" : [
{
"_index" : "example",
"_type" : "flightindex",
"_id" : "1",
"_score" : 1.0,
"_source" : {
"id" : 1,
"destination" : "London Airport (XYZ)"
}
}
]
}
}
------------------------------
I have a MongoDB document in which horses is an array with id, name, and type:
{
  "_id" : 33333333333,
  "horses" : [
    {
      "id" : 72029,
      "name" : "Awol",
      "type" : "flat"
    },
    {
      "id" : 822881,
      "name" : "Give Us A Reason",
      "type" : "flat"
    },
    {
      "id" : 826474,
      "name" : "Arabian Revolution",
      "type" : "flat"
    }
  ]
}
I need to add new fields to an element of the array.
I tried something like this, but it did not work:
horse = {
"place" : 1,
"body" : 11
}
Card.where({'_id' => 33333333333}).find_and_modify({'$set' => {'horses.' + index.to_s => horse}}, upsert:true)
But all the existing fields are removed and the new ones inserted. How can I add the new fields while keeping the existing ones?
Indeed, this command will overwrite the whole subdocument:
'$set': {
'horses.0': {
"place" : 1,
"body" : 11
}
}
You need to set individual fields:
'$set': {
'horses.0.place': 1,
'horses.0.body': 11
}
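The replace-versus-merge difference can be sketched with plain Python dicts (illustrative only; MongoDB applies the same logic server-side):

```python
# $set on 'horses.0' replaces the whole array element ...
doc = {"horses": [{"id": 72029, "name": "Awol", "type": "flat"}]}
doc["horses"][0] = {"place": 1, "body": 11}
print(doc["horses"][0])  # {'place': 1, 'body': 11} -- id/name/type are gone

# ... while $set on 'horses.0.place' and 'horses.0.body' merges
# the new fields into the existing element.
doc = {"horses": [{"id": 72029, "name": "Awol", "type": "flat"}]}
doc["horses"][0].update({"place": 1, "body": 11})
print(sorted(doc["horses"][0]))  # ['body', 'id', 'name', 'place', 'type']
```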
I am using FindAndModify in MongoDB in several concurrent processes. The collection size is about 3 million entries and everything works like a blast as long as I don't pass a sorting option (by an indexed field). Once I try to do so, the following warning is spawned to the logs:
warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test_db.wengine_queue top:
{
opid: 424210,
active: true,
lockType: "write",
waitingForLock: false,
secs_running: 0,
op: "query",
ns: "test_db",
query: {
findAndModify: "wengine_queue",
query: {
locked: { $ne: 1 },
rule_completed: { $in: [ "", "0", null ] },
execute_at: { $lt: 1324381363 },
company_id: 23,
debug: 0,
system_id: "AK/AK1201"
},
update: {
$set: { locked: 1 }
},
sort: {
execute_at: -1
}
},
client: "127.0.0.1:60873",
desc: "conn",
threadId: "0x1541bb000",
connectionId: 1147,
numYields: 0
}
I do have all the keys from the query indexed, here they are:
PRIMARY> db.wengine_queue.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"system_id" : 1,
"company_id" : 1,
"locked" : 1,
"rule_completed" : 1,
"execute_at" : -1,
"debug" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "system_id_1_company_id_1_locked_1_rule_completed_1_execute_at_-1_debug_1"
},
{
"v" : 1,
"key" : {
"debug" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "debug_1"
},
{
"v" : 1,
"key" : {
"system_id" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "system_id_1"
},
{
"v" : 1,
"key" : {
"company_id" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "company_id_1"
},
{
"v" : 1,
"key" : {
"locked" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "locked_1"
},
{
"v" : 1,
"key" : {
"rule_completed" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "rule_completed_1"
},
{
"v" : 1,
"key" : {
"execute_at" : -1
},
"ns" : "test_db.wengine_queue",
"name" : "execute_at_-1"
},
{
"v" : 1,
"key" : {
"thread_id" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "thread_id_1"
},
{
"v" : 1,
"key" : {
"rule_id" : 1
},
"ns" : "test_db.wengine_queue",
"name" : "rule_id_1"
}
]
Is there any way around this?
For those interested: I had to create a separate index ending with the key that the result set is sorted by.
That warning is thrown when an operation that wants to yield (such as a long update or remove) cannot do so because it cannot release the lock it is holding.
Do you have the field you're sorting on indexed? If not, adding an index for it will probably remove the warnings.