MongoDB findAndModify sorting

I am using findAndModify in MongoDB from several concurrent processes. The collection holds about 3 million entries, and everything works fine as long as I don't pass a sort option (on an indexed field). Once I do, the following warning appears in the logs:
warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test_db.wengine_queue top:
{
    opid: 424210,
    active: true,
    lockType: "write",
    waitingForLock: false,
    secs_running: 0,
    op: "query",
    ns: "test_db",
    query: {
        findAndModify: "wengine_queue",
        query: {
            locked: { $ne: 1 },
            rule_completed: { $in: [ "", "0", null ] },
            execute_at: { $lt: 1324381363 },
            company_id: 23,
            debug: 0,
            system_id: "AK/AK1201"
        },
        update: {
            $set: { locked: 1 }
        },
        sort: {
            execute_at: -1
        }
    },
    client: "127.0.0.1:60873",
    desc: "conn",
    threadId: "0x1541bb000",
    connectionId: 1147,
    numYields: 0
}
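For reference, the operation in that dump corresponds to a shell call like this (reconstructed directly from the op above):
db.wengine_queue.findAndModify({
    query: {
        locked: { $ne: 1 },
        rule_completed: { $in: [ "", "0", null ] },
        execute_at: { $lt: 1324381363 },
        company_id: 23,
        debug: 0,
        system_id: "AK/AK1201"
    },
    sort: { execute_at: -1 },
    update: { $set: { locked: 1 } }
})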
All the keys from the query are indexed; here they are:
PRIMARY> db.wengine_queue.getIndexes()
[
    {
        "v" : 1,
        "key" : { "_id" : 1 },
        "ns" : "test_db.wengine_queue",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "system_id" : 1,
            "company_id" : 1,
            "locked" : 1,
            "rule_completed" : 1,
            "execute_at" : -1,
            "debug" : 1
        },
        "ns" : "test_db.wengine_queue",
        "name" : "system_id_1_company_id_1_locked_1_rule_completed_1_execute_at_-1_debug_1"
    },
    { "v" : 1, "key" : { "debug" : 1 }, "ns" : "test_db.wengine_queue", "name" : "debug_1" },
    { "v" : 1, "key" : { "system_id" : 1 }, "ns" : "test_db.wengine_queue", "name" : "system_id_1" },
    { "v" : 1, "key" : { "company_id" : 1 }, "ns" : "test_db.wengine_queue", "name" : "company_id_1" },
    { "v" : 1, "key" : { "locked" : 1 }, "ns" : "test_db.wengine_queue", "name" : "locked_1" },
    { "v" : 1, "key" : { "rule_completed" : 1 }, "ns" : "test_db.wengine_queue", "name" : "rule_completed_1" },
    { "v" : 1, "key" : { "execute_at" : -1 }, "ns" : "test_db.wengine_queue", "name" : "execute_at_-1" },
    { "v" : 1, "key" : { "thread_id" : 1 }, "ns" : "test_db.wengine_queue", "name" : "thread_id_1" },
    { "v" : 1, "key" : { "rule_id" : 1 }, "ns" : "test_db.wengine_queue", "name" : "rule_id_1" }
]
Is there any way around this?

For those interested: I had to create a separate index ending with the key that the result set is to be sorted by.
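A minimal sketch of such an index, using the fields from the query above and putting the sort key execute_at last (createIndex; ensureIndex in older shells):
db.wengine_queue.createIndex({
    system_id: 1,
    company_id: 1,
    locked: 1,
    rule_completed: 1,
    debug: 1,
    execute_at: -1   // the key the result set is sorted by goes last
})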

That warning is thrown when an operation that wants to yield (such as a long update or remove) cannot do so because it cannot release the lock it is holding, for whatever reason.
Is the field you're sorting on indexed? If not, adding an index for it will probably make the warnings go away.
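One quick way to check is to run the same predicate as a plain find() with explain() and confirm the sort is satisfied by an index rather than done in memory (reported as scanAndOrder in older versions, as a SORT stage in newer ones). A sketch using the query from the question:
db.wengine_queue.find({
    locked: { $ne: 1 },
    rule_completed: { $in: [ "", "0", null ] },
    execute_at: { $lt: 1324381363 },
    company_id: 23,
    debug: 0,
    system_id: "AK/AK1201"
}).sort({ execute_at: -1 }).explain()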

Related

MongoDB hangs randomly

Overview
I have a Ruby application that uses MongoDB as its database. While running the tests for this application, I create collections and indexes for every test case using Minitest.
The test environment is set up with Docker Compose: one container runs the tests and another runs MongoDB.
Problem
When the tests are run for the first time, MongoDB gets stuck after a while: any query against the collections stops responding.
I was able to connect to the server with the command-line client before the tests started running. Checking the state of the server with db.serverStatus(), I can see that some operations have acquired locks; from the globalLock field I understand that one operation holds the write lock and two operations are waiting to acquire the read lock.
I cannot understand why these operations hang and never yield their locks, and I have no idea how to debug this problem further.
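For reference, one way to get more detail while the server is stuck is db.currentOp() in the shell; a sketch that lists only the operations holding or waiting for locks:
// Pass true to include idle connections and system operations as well.
db.currentOp(true).inprog.forEach(function (op) {
    if (op.waitingForLock || (op.locks && Object.keys(op.locks).length > 0)) {
        printjson({
            opid: op.opid,
            op: op.op,
            ns: op.ns,
            secs_running: op.secs_running,
            waitingForLock: op.waitingForLock
        });
    }
});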
MongoDB Version: 3.6.13
Ruby Driver version: 2.8.0
I've also tried other 3.6.x versions, as well as 4.0.
Any help or direction is highly appreciated.
db.serverStatus output
{
"host" : "c658c885eb90",
"version" : "3.6.14",
"process" : "mongod",
"pid" : NumberLong(1),
"uptime" : 98,
"uptimeMillis" : NumberLong(97909),
"uptimeEstimate" : NumberLong(97),
"localTime" : ISODate("2019-11-03T16:09:14.289Z"),
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 0,
"rollovers" : 0
},
"connections" : {
"current" : 6,
"available" : 838854,
"totalCreated" : 11
},
"extra_info" : {
"note" : "fields vary by platform",
"page_faults" : 0
},
"globalLock" : {
"totalTime" : NumberLong(97908000),
"currentQueue" : {
"total" : 2,
"readers" : 2,
"writers" : 0
},
"activeClients" : {
"total" : 13,
"readers" : 0,
"writers" : 1
}
},
"locks" : {
"Global" : {
"acquireCount" : {
"r" : NumberLong(14528),
"w" : NumberLong(12477),
"W" : NumberLong(5)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(1020),
"w" : NumberLong(14459),
"R" : NumberLong(3),
"W" : NumberLong(6599)
},
"acquireWaitCount" : {
"r" : NumberLong(2)
},
"timeAcquiringMicros" : {
"r" : NumberLong(76077321)
}
},
"Collection" : {
"acquireCount" : {
"R" : NumberLong(1018),
"W" : NumberLong(8805)
}
},
"Metadata" : {
"acquireCount" : {
"W" : NumberLong(37)
}
}
},
"logicalSessionRecordCache" : {
"activeSessionsCount" : 3,
"sessionsCollectionJobCount" : 1,
"lastSessionsCollectionJobDurationMillis" : 0,
"lastSessionsCollectionJobTimestamp" : ISODate("2019-11-03T16:07:36.407Z"),
"lastSessionsCollectionJobEntriesRefreshed" : 0,
"lastSessionsCollectionJobEntriesEnded" : 0,
"lastSessionsCollectionJobCursorsClosed" : 0,
"transactionReaperJobCount" : 0,
"lastTransactionReaperJobDurationMillis" : 0,
"lastTransactionReaperJobTimestamp" : ISODate("2019-11-03T16:07:36.407Z"),
"lastTransactionReaperJobEntriesCleanedUp" : 0
},
"network" : {
"bytesIn" : NumberLong(1682811),
"bytesOut" : NumberLong(1019834),
"physicalBytesIn" : NumberLong(1682811),
"physicalBytesOut" : NumberLong(1019834),
"numRequests" : NumberLong(7822),
"compression" : {
"snappy" : {
"compressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
},
"decompressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
}
}
},
"serviceExecutorTaskStats" : {
"executor" : "passthrough",
"threadsRunning" : 6
}
},
"opLatencies" : {
"reads" : {
"latency" : NumberLong(61374),
"ops" : NumberLong(963)
},
"writes" : {
"latency" : NumberLong(13074),
"ops" : NumberLong(286)
},
"commands" : {
"latency" : NumberLong(988232),
"ops" : NumberLong(6570)
}
},
"opReadConcernCounters" : {
"available" : NumberLong(0),
"linearizable" : NumberLong(0),
"local" : NumberLong(0),
"majority" : NumberLong(0),
"none" : NumberLong(944)
},
"opcounters" : {
"insert" : 246,
"query" : 944,
"update" : 40,
"delete" : 0,
"getmore" : 0,
"command" : 6595
},
"opcountersRepl" : {
"insert" : 0,
"query" : 0,
"update" : 0,
"delete" : 0,
"getmore" : 0,
"command" : 0
},
"storageEngine" : {
"name" : "ephemeralForTest",
"supportsCommittedReads" : false,
"readOnly" : false,
"persistent" : false
},
"tcmalloc" : {
"generic" : {
"current_allocated_bytes" : 8203504,
"heap_size" : 12496896
},
"tcmalloc" : {
"pageheap_free_bytes" : 2760704,
"pageheap_unmapped_bytes" : 0,
"max_total_thread_cache_bytes" : 516947968,
"current_total_thread_cache_bytes" : 1007120,
"total_free_bytes" : 1532688,
"central_cache_free_bytes" : 231040,
"transfer_cache_free_bytes" : 294528,
"thread_cache_free_bytes" : 1007120,
"aggressive_memory_decommit" : 0,
"pageheap_committed_bytes" : 12496896,
"pageheap_scavenge_count" : 0,
"pageheap_commit_count" : 9,
"pageheap_total_commit_bytes" : 12496896,
"pageheap_decommit_count" : 0,
"pageheap_total_decommit_bytes" : 0,
"pageheap_reserve_count" : 9,
"pageheap_total_reserve_bytes" : 12496896,
"spinlock_total_delay_ns" : 0,
"formattedString" : "------------------------------------------------\nMALLOC: 8204080 ( 7.8 MiB) Bytes in use by application\nMALLOC: + 2760704 ( 2.6 MiB) Bytes in page heap freelist\nMALLOC: + 231040 ( 0.2 MiB) Bytes in central cache freelist\nMALLOC: + 294528 ( 0.3 MiB) Bytes in transfer cache freelist\nMALLOC: + 1006544 ( 1.0 MiB) Bytes in thread cache freelists\nMALLOC: + 1204480 ( 1.1 MiB) Bytes in malloc metadata\nMALLOC: ------------\nMALLOC: = 13701376 ( 13.1 MiB) Actual memory used (physical + swap)\nMALLOC: + 0 ( 0.0 MiB) Bytes released to OS (aka unmapped)\nMALLOC: ------------\nMALLOC: = 13701376 ( 13.1 MiB) Virtual address space used\nMALLOC:\nMALLOC: 415 Spans in use\nMALLOC: 18 Thread heaps in use\nMALLOC: 4096 Tcmalloc page size\n------------------------------------------------\nCall ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).\nBytes released to the OS take up virtual address space but no physical memory.\n"
}
},
"transactions" : {
"retriedCommandsCount" : NumberLong(0),
"retriedStatementsCount" : NumberLong(0),
"transactionsCollectionWriteCount" : NumberLong(0)
},
"transportSecurity" : {
"1.0" : NumberLong(0),
"1.1" : NumberLong(0),
"1.2" : NumberLong(0),
"1.3" : NumberLong(0),
"unknown" : NumberLong(0)
},
"mem" : {
"bits" : 64,
"resident" : 41,
"virtual" : 836,
"supported" : true,
"mapped" : 0
},
"metrics" : {
"commands" : {
"buildInfo" : {
"failed" : NumberLong(0),
"total" : NumberLong(2)
},
"count" : {
"failed" : NumberLong(0),
"total" : NumberLong(21)
},
"createIndexes" : {
"failed" : NumberLong(0),
"total" : NumberLong(5656)
},
"drop" : {
"failed" : NumberLong(0),
"total" : NumberLong(784)
},
"dropIndexes" : {
"failed" : NumberLong(87),
"total" : NumberLong(87)
},
"find" : {
"failed" : NumberLong(0),
"total" : NumberLong(944)
},
"getLog" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"insert" : {
"failed" : NumberLong(0),
"total" : NumberLong(246)
},
"isMaster" : {
"failed" : NumberLong(0),
"total" : NumberLong(38)
},
"listCollections" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"listIndexes" : {
"failed" : NumberLong(1),
"total" : NumberLong(1)
},
"replSetGetStatus" : {
"failed" : NumberLong(1),
"total" : NumberLong(1)
},
"serverStatus" : {
"failed" : NumberLong(0),
"total" : NumberLong(2)
},
"update" : {
"failed" : NumberLong(0),
"total" : NumberLong(40)
},
"whatsmyuri" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
}
},
"cursor" : {
"timedOut" : NumberLong(0),
"open" : {
"noTimeout" : NumberLong(0),
"pinned" : NumberLong(0),
"total" : NumberLong(0)
}
},
"document" : {
"deleted" : NumberLong(0),
"inserted" : NumberLong(246),
"returned" : NumberLong(398),
"updated" : NumberLong(40)
},
"getLastError" : {
"wtime" : {
"num" : 0,
"totalMillis" : 0
},
"wtimeouts" : NumberLong(0)
},
"operation" : {
"scanAndOrder" : NumberLong(0),
"writeConflicts" : NumberLong(0)
},
"query" : {
"updateOneOpStyleBroadcastWithExactIDCount" : NumberLong(0),
"upsertReplacementCannotTargetByQueryCount" : NumberLong(0)
},
"queryExecutor" : {
"scanned" : NumberLong(435),
"scannedObjects" : NumberLong(438)
},
"record" : {
"moves" : NumberLong(0)
},
"repl" : {
"executor" : {
"pool" : {
"inProgressCount" : 0
},
"queues" : {
"networkInProgress" : 0,
"sleepers" : 0
},
"unsignaledEvents" : 0,
"shuttingDown" : false,
"networkInterface" : "\nNetworkInterfaceASIO Operations' Diagnostic:\nOperation: Count: \nConnecting 0 \nIn Progress 0 \nSucceeded 0 \nCanceled 0 \nFailed 0 \nTimed Out 0 \n\n"
},
"apply" : {
"attemptsToBecomeSecondary" : NumberLong(0),
"batchSize" : NumberLong(0),
"batches" : {
"num" : 0,
"totalMillis" : 0
},
"ops" : NumberLong(0)
},
"buffer" : {
"count" : NumberLong(0),
"maxSizeBytes" : NumberLong(0),
"sizeBytes" : NumberLong(0)
},
"initialSync" : {
"completed" : NumberLong(0),
"failedAttempts" : NumberLong(0),
"failures" : NumberLong(0)
},
"network" : {
"bytes" : NumberLong(0),
"getmores" : {
"num" : 0,
"totalMillis" : 0
},
"ops" : NumberLong(0),
"readersCreated" : NumberLong(0)
},
"preload" : {
"docs" : {
"num" : 0,
"totalMillis" : 0
},
"indexes" : {
"num" : 0,
"totalMillis" : 0
}
}
},
"storage" : {
"freelist" : {
"search" : {
"bucketExhausted" : NumberLong(0),
"requests" : NumberLong(0),
"scanned" : NumberLong(0)
}
}
},
"ttl" : {
"deletedDocuments" : NumberLong(0),
"passes" : NumberLong(1)
}
},
"ok" : 1
}

three.js glTF alpha mode BLEND does not work at some angles

I have a glTF in which I apply PNG textures to rectangular meshes: a rectangle PNG and a circle PNG. The rectangle node is at z = 0.01 and the circle at z = 0.0. The alpha mode used for both materials is BLEND, and the materials are double-sided.
GLTF
{
"scenes" : [
{
"nodes" : [
0
]
}
],
"nodes" : [
{
"name" : "Node_0",
"children" : [
1,
3
]
},
{
"name" : "Symbol 2",
"children" : [
2
],
"translation" : [
240.25,
-126.300003,
0
]
},
{
"name" : "Node_2",
"mesh" : 0,
"scale" : [
0.656657,
0.656657,
1
]
},
{
"name" : "Symbol 1",
"children" : [
4
],
"translation" : [
170,
-89.050003,
0
]
},
{
"name" : "Node_4",
"mesh" : 1,
"translation" : [
0,
0,
-0.01
],
"scale" : [
2.059968,
1.399979,
1
]
}
],
"meshes" : [
{
"primitives" : [
{
"attributes" : {
"POSITION" : 1,
"TEXCOORD_0" : 2
},
"indices" : 0,
"material" : 0
}
]
},
{
"primitives" : [
{
"attributes" : {
"POSITION" : 1,
"TEXCOORD_0" : 2
},
"indices" : 0,
"material" : 1
}
]
}
],
"buffers" : [
{
"uri" : "data:application/octet-stream;base64,AQAAAAAAAAADAAAAAwAAAAIAAAABAAAAAAAAAAAAAIAAAAAAAAAAAAAAyMIAAAAAAADIQgAAyMIAAAAAAADIQgAAAIAAAAAAAAAAAAAAAAAAAAAAAACAPwAAgD8AAIA/AACAPwAAAAAAAAAAq6oqPacaKD+nGig/AACAP6caKD+nGig/AACAPwAAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAACAAACAPwAAAACrqio9AACAPwAAgD8AAIA/AACAPwAAgD8AAIA/AAAAAABAcEOamfzCAAAAAAAAAAAAAAAAAAAAAAAAAIAAAIA/AAAAAKuqKj2G1gNAfzKzPwAAgD+G1gNAfzKzPwAAgD8AAAAAAAAAAAAAAIAK1yO8AAAAAAAAAAAAAAAAAAAAgAAAgD8AAAAAq6oqPQAAgD8AAIA/AACAPwAAgD8AAIA/AACAPwAAAAAAACpDmhmywgAAAAAAAAAAAAAAAAAAAAAAAACAAACAPw==",
"byteLength" : 376
}
],
"bufferViews" : [
{
"buffer" : 0,
"byteOffset" : 0,
"byteLength" : 24,
"target" : 34963
},
{
"buffer" : 0,
"byteOffset" : 24,
"byteLength" : 48,
"target" : 34962
},
{
"buffer" : 0,
"byteOffset" : 72,
"byteLength" : 32,
"target" : 34962
}
],
"accessors" : [
{
"name" : "accessor_0",
"bufferView" : 0,
"byteOffset" : 0,
"componentType" : 5125,
"count" : 6,
"type" : "SCALAR",
"max" : [
3
],
"min" : [
0
]
},
{
"name" : "accessor_1",
"bufferView" : 1,
"byteOffset" : 0,
"componentType" : 5126,
"count" : 4,
"type" : "VEC3",
"max" : [
100,
0,
0
],
"min" : [
0,
-100,
0
]
},
{
"name" : "accessor_2",
"bufferView" : 2,
"byteOffset" : 0,
"componentType" : 5126,
"count" : 4,
"type" : "VEC2",
"max" : [
1,
1
],
"min" : [
0,
0
]
}
],
"materials" : [
{
"pbrMetallicRoughness" : {
"baseColorTexture" : {
"index" : 0
}
},
"alphaMode" : "BLEND",
"doubleSided" : true
},
{
"pbrMetallicRoughness" : {
"baseColorTexture" : {
"index" : 1
}
},
"alphaMode" : "BLEND",
"doubleSided" : true
}
],
"samplers" : [
{
"magFilter" : 9729,
"minFilter" : 9987,
"wrapS" : 33071,
"wrapT" : 33071
}
],
"textures" : [
{
"sampler" : 0,
"source" : 0
},
{
"sampler" : 0,
"source" : 1
}
],
"images" : [
{
"uri" : "Image0.png"
},
{
"uri" : "Image1.png"
}
],
"asset" : {
"version" : "2.0"
}
}
PNGs: the two texture images (Image0.png and Image1.png) are not shown here.
I am using the three.js glTF viewer: https://gltf-viewer.donmccurdy.com/
Blending works fine at some angles, but when I rotate the camera, at some angles blending stops working (screenshots omitted).
Can someone explain this behaviour to me, and how can I achieve correct blending at all angles?
If these are three separate nodes and they all live at (0, 0, 0), you might have a sorting problem. glTF is readable, but I'm not familiar enough with the spec to tell exactly what is going on; the nodes in the file that have no translation field might all be positioned at (0, 0, 0).
Either way, if you want to keep sorting, the remedy for something like this is to assign each mesh a different yourMesh.renderOrder = yourDesiredOrder. For these elements you could set 1, 2, 3, 4, ... and so control when you want them to draw / tell the sorting to consider these weights.
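A minimal sketch with GLTFLoader (the file name scene.gltf and the existing scene object are assumptions; the node names come from the glTF above):
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('scene.gltf', (gltf) => {
    gltf.scene.traverse((obj) => {
        if (obj.isMesh) {
            // Lower renderOrder draws first: give the mesh that should sit
            // behind the smaller value, and the one on top the larger value.
            obj.renderOrder = obj.name === 'Node_4' ? 1 : 2;
        }
    });
    scene.add(gltf.scene); // 'scene' is your existing THREE.Scene
});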

Number of records processed in logstash

We're using Logstash to sync to Elasticsearch, and we have around 3 million documents. The sync takes 3 to 4 hours, and currently all we can see is that it started and that it stopped. Is there any way to see how many records Logstash has processed?
If you're using Logstash 5 or higher, the Logstash Monitoring API can help you: you can see and monitor what's happening inside Logstash as it processes events. If you hit the Pipeline stats API, you get the total number of processed events per stage and plugin (input/filter/output):
curl -XGET 'localhost:9600/_node/stats/pipelines?pretty'
You'll get this type of response, in which you can see at any time how many events have been processed:
{
    "pipelines" : {
        "test" : {
            "events" : {
                "duration_in_millis" : 365495,
                "in" : 216485,
                "filtered" : 216485,
                "out" : 216485,
                "queue_push_duration_in_millis" : 342466
            },
            "plugins" : {
                "inputs" : [ {
                    "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-1",
                    "events" : {
                        "out" : 216485,
                        "queue_push_duration_in_millis" : 342466
                    },
                    "name" : "beats"
                } ],
                "filters" : [ {
                    "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-2",
                    "events" : {
                        "duration_in_millis" : 55969,
                        "in" : 216485,
                        "out" : 216485
                    },
                    "failures" : 216485,
                    "patterns_per_field" : {
                        "message" : 1
                    },
                    "name" : "grok"
                }, {
                    "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-3",
                    "events" : {
                        "duration_in_millis" : 3326,
                        "in" : 216485,
                        "out" : 216485
                    },
                    "name" : "geoip"
                } ],
                "outputs" : [ {
                    "id" : "35131f351e2dc5ed13ee04265a8a5a1f95292165-4",
                    "events" : {
                        "duration_in_millis" : 278557,
                        "in" : 216485,
                        "out" : 216485
                    },
                    "name" : "elasticsearch"
                } ]
            },
            "reloads" : {
                "last_error" : null,
                "successes" : 0,
                "last_success_timestamp" : null,
                "last_failure_timestamp" : null,
                "failures" : 0
            },
            "queue" : {
                "type" : "memory"
            }
        }
    }
}
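If you want a running progress counter rather than a one-off snapshot, you can poll that endpoint. A small sketch (Node.js 18+ for built-in fetch; the pipeline names and counters come straight from the response shape above):
// Print, every 5 seconds, how many events each pipeline has taken in,
// filtered, and emitted so far.
const url = 'http://localhost:9600/_node/stats/pipelines';

setInterval(async () => {
    const res = await fetch(url);
    const { pipelines } = await res.json();
    for (const [name, p] of Object.entries(pipelines)) {
        console.log(`${name}: in=${p.events.in} filtered=${p.events.filtered} out=${p.events.out}`);
    }
}, 5000);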

Having trouble with a slow MongoDB aggregation query

I have the following aggregation query in MongoDB
return mongoose.model('Submission')
    .aggregate([
        { $match: { client: { $in: clientIds }, admin: this._admin._id } },
        { $sort: { client: 1, submitted: -1 } },
        { $group: {
            _id: '$client',
            lastSubmitted: { $first: '$submitted' },
            timezone: { $first: '$timezone' },
        } },
    ])
    .exec();
which is performing really badly on a collection of about 2,000 documents. It usually takes 5 seconds to complete, and I've seen it take as long as 15 seconds. I have the following index on the submissions collection:
{
    client : 1,
    admin : 1,
    assessment : 1,
    submitted : -1,
}
I'm stuck as to why it's taking so long. Any suggestions?
EDIT
I've run the query
db.submissions.aggregate([
    { $match: {
        client: { $in: ['54a4cdfdd0666c243035dc98', '55cc985291a0ffab6849de34'] },
        admin: '542b4af8880fc300007eb411'
    } },
    { $sort: { client: 1, submitted: -1 } },
    { $group: {
        _id: '$client',
        lastSubmitted: { $first: '$submitted' },
        timezone: { $first: '$timezone' }
    } }
], { explain: true })
in the shell with explain and got
{
    "stages" : [
        {
            "$cursor" : {
                "query" : {
                    "client" : {
                        "$in" : [
                            "54a4cdfdd0666c243035dc98",
                            "55cc985291a0ffab6849de34"
                        ]
                    },
                    "admin" : "542b4af8880fc300007eb411"
                },
                "fields" : {
                    "client" : 1,
                    "submitted" : 1,
                    "timezone" : 1,
                    "_id" : 0
                },
                "queryPlanner" : {
                    "plannerVersion" : 1,
                    "namespace" : "webdemo.submissions",
                    "indexFilterSet" : false,
                    "parsedQuery" : {
                        "$and" : [
                            { "admin" : { "$eq" : "542b4af8880fc300007eb411" } },
                            { "client" : { "$in" : [
                                "54a4cdfdd0666c243035dc98",
                                "55cc985291a0ffab6849de34"
                            ] } }
                        ]
                    },
                    "winningPlan" : {
                        "stage" : "FETCH",
                        "inputStage" : {
                            "stage" : "IXSCAN",
                            "keyPattern" : {
                                "client" : 1,
                                "admin" : 1,
                                "assessment" : 1,
                                "submitted" : -1
                            },
                            "indexName" : "client_1_admin_1_assessment_1_submitted_-1",
                            "isMultiKey" : false,
                            "direction" : "forward",
                            "indexBounds" : {
                                "client" : [
                                    "[\"54a4cdfdd0666c243035dc98\", \"54a4cdfdd0666c243035dc98\"]",
                                    "[\"55cc985291a0ffab6849de34\", \"55cc985291a0ffab6849de34\"]"
                                ],
                                "admin" : [
                                    "[\"542b4af8880fc300007eb411\", \"542b4af8880fc300007eb411\"]"
                                ],
                                "assessment" : [
                                    "[MinKey, MaxKey]"
                                ],
                                "submitted" : [
                                    "[MaxKey, MinKey]"
                                ]
                            }
                        }
                    },
                    "rejectedPlans" : [ ]
                }
            }
        },
        {
            "$sort" : {
                "sortKey" : {
                    "client" : 1,
                    "submitted" : -1
                }
            }
        },
        {
            "$group" : {
                "_id" : "$client",
                "lastSubmitted" : { "$first" : "$submitted" },
                "timezone" : { "$first" : "$timezone" }
            }
        }
    ],
    "ok" : 1
}
EDIT 2
The output I get from db.submissions.getIndices() is
[
    {
        "v" : 1,
        "key" : { "_id" : 1 },
        "name" : "_id_",
        "ns" : "webdemo.submissions"
    },
    {
        "v" : 1,
        "key" : {
            "client" : 1,
            "admin" : 1,
            "assessment" : 1,
            "submitted" : -1
        },
        "name" : "client_1_admin_1_assessment_1_submitted_-1",
        "ns" : "webdemo.submissions",
        "background" : true
    }
]

Update an existing collection in MongoDB

I have a collection with documents like this:
{
    "_id" : 100000001,
    "horses" : [],
    "race" : {
        "date" : ISODate("2014-06-05T00:00:00.000Z"),
        "time" : ISODate("2014-06-05T02:40:00.000Z"),
        "type" : "Flat",
        "name" : "Hindwoods Maiden Stakes (Div I)",
        "run_befor" : 11,
        "finish" : null,
        "channel" : "ATR"
    },
    "track" : {
        "fences" : 0,
        "omitted" : 0,
        "hdles" : 0,
        "name" : "Lingfield",
        "country" : "GB",
        "type" : "",
        "going" : "Good"
    }
}
I'm trying to update it with this data:
# result value
{
    "race" : {
        "run_after" : "10",
        "finish" : {
            "time" : 152.34,
            "slow" : 1,
            "fast" : 0,
            "gap" : 5.34
        }
    },
    "track" : {
        "name" : "Lingfield",
        "country" : "GB",
        "type" : "",
        "going" : "Good",
        "fences" : 0,
        "omitted" : 0,
        "hdles" : 0
    }
}
Card.where(_id: 100000001).update(result)
When I run this update, all the existing data in the document is deleted and the new data is inserted; using set() does the same. How do I update the document so that existing fields are updated and new fields are added, rather than the whole document being replaced?
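In the mongo shell, the usual way to change only specific fields is $set with dot notation, which leaves the rest of the document intact. A sketch using the values above (the collection name cards is an assumption based on the Card model):
// Only the named fields are changed; everything else in the document stays.
db.cards.update(
    { _id: 100000001 },
    { $set: {
        "race.run_after" : "10",
        "race.finish" : { "time" : 152.34, "slow" : 1, "fast" : 0, "gap" : 5.34 },
        "track.going" : "Good"
    } }
)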
