MongoDB high CPU - many slow queries on the special virtual collection db.$cmd - performance

While diagnosing high CPU usage on a MongoDB instance, we found many slow (6-7 second) queries. All of them are related to "ns" : "mydb.$cmd".
A slow query entry looks like the one below:
{
"_id" : ObjectId("5571b739f65f7e64bb806362"),
"op" : "command",
"ns" : "mydb.$cmd",
"command" : {
"aggregate" : "MyCollection",
"pipeline" : [
{
"$mergeCursors" : [
{
"host" : "abc:27005",
"id" : NumberLong(82775337156)
}
]
}
]
},
"keyUpdates" : 0,
"numYield" : 0,
"lockStats" : {
"timeLockedMicros" : {
"r" : NumberLong(12),
"w" : NumberLong(0)
},
"timeAcquiringMicros" : {
"r" : NumberLong(2),
"w" : NumberLong(2680)
}
},
"responseLength" : 12312,
"millis" : 6142,
"execStats" : {},
"ts" : ISODate("2015-06-05T12:35:40.801Z"),
"client" : "1.1.1.1",
"allUsers" : [],
"user" : ""
}
We are not sure what part of our code is causing these queries. How should we proceed to find / debug which application queries are behind these slow $cmd queries?

Those log entries are the queries issued when running a command against the specified database (mydb in your case). This is therefore just an aggregation command being run against your MongoDB.
If your application is not doing this directly, it would appear (as documented in http://dbattish.tumblr.com/post/108652372056/joins-in-mongodb) that the $mergeCursors stage is used from v2.6 onwards to merge query results across shards.
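To trace which application queries end up as these $cmd entries, one option is to keep the database profiler on and correlate the system.profile entries by client address and timestamp with what your application servers were running at that moment. Below is a minimal pymongo sketch of that idea, not taken from the original answer; the connection string, database name, and thresholds are placeholder assumptions.

from pymongo import MongoClient

# Placeholder connection string and database name; point this at the member
# that is logging the slow $cmd operations.
client = MongoClient("mongodb://abc:27005")
db = client["mydb"]

# Profiling level 1 records only operations slower than `slowms` (100 ms here).
db.command("profile", 1, slowms=100)

# Pull the most recent slow entries and print who issued them, so they can be
# matched against application hosts and code paths.
for entry in db["system.profile"].find({"millis": {"$gt": 1000}}).sort("ts", -1).limit(20):
    print(entry["ts"], entry.get("client"), entry.get("op"), entry.get("ns"), entry["millis"], "ms")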

My tests show that MongoDB always uses ~90-100% CPU when it deals with concurrent requests. Because of this I moved to MySQL. My app runs the same simple queries 3x faster with MySQL, and it uses much less CPU. I will publish an article with the full test results soon. For now, just look at the CPU usage of MongoDB and MariaDB for queries with X = 5, 10, 25, 50, 100, 500, 1000 concurrent connections:
siege -b -cX -t1M url
As I realized, high CPU usage is not related to query complexity: even very simple queries under concurrent requests make MongoDB use 100% CPU.
All tests were run with 1 vCPU, 1 GB of memory, and a connection pool size of 10.
[MongoDB CPU usage screenshot]
[MySQL CPU usage screenshot]
I did many tests with different configurations (4 vCPU, 6 GB memory), and MongoDB always used more CPU than MySQL. What you can try with MongoDB:
Change the connection pool size (see the sketch after this list). I hope you don't open a connection per query.
Are you using Mongoose? Try the native Node.js MongoDB driver - it is much faster.
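To illustrate the pooling point above: create one client with a bounded pool at application startup and reuse it for every query, instead of opening a new connection per query. A minimal pymongo sketch of that idea follows (the URI and pool size are placeholders; the same principle applies to Mongoose or the native Node.js driver).

from pymongo import MongoClient

# One client per process, created at startup. It maintains an internal
# connection pool (capped at 10 here), so individual queries reuse sockets.
client = MongoClient("mongodb://localhost:27017", maxPoolSize=10)
db = client["mydb"]

def get_user(user_id):
    # Reuses a pooled connection; do NOT construct a new MongoClient here.
    return db["users"].find_one({"_id": user_id})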
I am very disappointed with MongoDB for reading data. Not only does MySQL use much less CPU, it was always at least 3x faster!

Related

Elastic Cloud Circuit_breaking_exception

We recently upgraded our Elastic Cloud deployment from 6.8.5 to 7.9.
After the upgrade we are seeing the following error from time to time:
{
"error" : {
"root_cause" : [
{
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [416906520/397.5mb], which is larger than the limit of [408420352/389.5mb], real usage: [416906520/397.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=32399/31.6kb, in_flight_requests=0/0b, model_inference=0/0b, accounting=4714192/4.4mb]",
"bytes_wanted" : 416906520,
"bytes_limit" : 408420352,
"durability" : "PERMANENT"
}
],
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [416906520/397.5mb], which is larger than the limit of [408420352/389.5mb], real usage: [416906520/397.5mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=32399/31.6kb, in_flight_requests=0/0b, model_inference=0/0b, accounting=4714192/4.4mb]",
"bytes_wanted" : 416906520,
"bytes_limit" : 408420352,
"durability" : "PERMANENT"
},
"status" : 429
}
This deployment consists of only one node with 1 GB of memory. We would like to know the cause of this error. Is it due to the upgrade?
Thank you.
First, the circuit breaker is a protection mechanism that stops a request from pushing your cluster over the limit of what it can handle: it kills a single request rather than (potentially) the entire cluster. Also note that this HTTP request alone isn't too large, but it trips the parent circuit breaker, so this request on top of everything else already in memory would be too much.
The initial circuit breaker was added back in 6.2.0, but it was tightened down further in 7.0.0. I assume that is why you are seeing this (more frequently) now.
You could raise indices.breaker.total.limit, but this isn't a magic switch to get more out of your cluster. 1 GB of memory might simply not be enough for what you are trying to do.
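If you do want to experiment with that limit, it is a dynamic cluster setting and can be changed through the cluster settings API. Here is a hedged sketch using the Python requests library; the endpoint and the 80% value are illustrative assumptions, not recommendations.

import requests

# Placeholder endpoint and value; raising the parent breaker limit trades
# safety margin for headroom and does not add any real memory.
resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"indices.breaker.total.limit": "80%"}},
)
print(resp.status_code, resp.json())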

HDFS Visulization of block distribution

I'm trying to create a visualization of the HDFS block distribution of a cluster.
I plan to build this using Tableau, but I am wondering what types of visualizations would give an idea of which nodes need rebalancing, and also what an efficient way is to get the server log data into Tableau.
Before investing too much time in this, you might want to take a look at Twitter's open source HDFS-DU project. This provides a view of utilization based on paths within the file system rather than DataNodes within the cluster, but perhaps that's still helpful for your requirements.
If the goal is just to identify nodes in need of rebalancing, then this information is already accessible on the NameNode web UI "Datanodes" tab. You could also run hdfs dfsadmin -report to get utilization stats for each node in a script.
If none of the above meets your requirements, and you need to proceed with integrating the information into an external reporting tool like Tableau, then a helpful integration point might be the JMX metrics exposed via HTTP on the NameNode. See below for an example curl command that queries some of this information from the NameNode. Note in particular the LiveNodes section, which contains capacity information about each DataNode.
Some additional information about these metrics is available in the Apache Hadoop Metrics documentation.
> curl 'http://127.0.0.1:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'
{
"beans" : [ {
"name" : "Hadoop:service=NameNode,name=NameNodeInfo",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.FSNamesystem",
"Threads" : 46,
"Version" : "3.0.0-alpha2-SNAPSHOT, rdf497b3a739714c567c9c2322608f0659da20cc4",
"Used" : 5263360,
"Free" : 884636377088,
"Safemode" : "",
"NonDfsUsedSpace" : 114431086592,
"PercentUsed" : 5.266863E-4,
"BlockPoolUsedSpace" : 5263360,
"PercentBlockPoolUsed" : 5.266863E-4,
"PercentRemaining" : 88.52252,
"CacheCapacity" : 0,
"CacheUsed" : 0,
"TotalBlocks" : 50,
"NumberOfMissingBlocks" : 0,
"NumberOfMissingBlocksWithReplicationFactorOne" : 0,
"LiveNodes" : "{\"192.168.0.117:9866\":{\"infoAddr\":\"127.0.0.1:9864\",\"infoSecureAddr\":\"127.0.0.1:0\",\"xferaddr\":\"127.0.0.1:9866\",\"lastContact\":2,\"usedSpace\":5263360,\"adminState\":\"In Service\",\"nonDfsUsedSpace\":114431086592,\"capacity\":999334871040,\"numBlocks\":50,\"version\":\"3.0.0-alpha2-SNAPSHOT\",\"used\":5263360,\"remaining\":884636377088,\"blockScheduled\":0,\"blockPoolUsed\":5263360,\"blockPoolUsedPercent\":5.266863E-4,\"volfails\":0}}",
"DeadNodes" : "{}",
"DecomNodes" : "{}",
"BlockPoolId" : "BP-1429209999-10.195.15.240-1484933797029",
"NameDirStatuses" : "{\"active\":{\"/Users/naurc001/hadoop-deploy-trunk/data/dfs/name\":\"IMAGE_AND_EDITS\"},\"failed\":{}}",
"NodeUsage" : "{\"nodeUsage\":{\"min\":\"0.00%\",\"median\":\"0.00%\",\"max\":\"0.00%\",\"stdDev\":\"0.00%\"}}",
"NameJournalStatus" : "[{\"manager\":\"FileJournalManager(root=/Users/naurc001/hadoop-deploy-trunk/data/dfs/name)\",\"stream\":\"EditLogFileOutputStream(/Users/naurc001/hadoop-deploy-trunk/data/dfs/name/current/edits_inprogress_0000000000000000862)\",\"disabled\":\"false\",\"required\":\"false\"}]",
"JournalTransactionInfo" : "{\"MostRecentCheckpointTxId\":\"861\",\"LastAppliedOrWrittenTxId\":\"862\"}",
"NNStartedTimeInMillis" : 1485715900031,
"CompileInfo" : "2017-01-03T21:06Z by naurc001 from trunk",
"CorruptFiles" : "[]",
"NumberOfSnapshottableDirs" : 0,
"DistinctVersionCount" : 1,
"DistinctVersions" : [ {
"key" : "3.0.0-alpha2-SNAPSHOT",
"value" : 1
} ],
"SoftwareVersion" : "3.0.0-alpha2-SNAPSHOT",
"NameDirSize" : "{\"/Users/naurc001/hadoop-deploy-trunk/data/dfs/name\":2112351}",
"RollingUpgradeStatus" : null,
"ClusterId" : "CID-4526ea43-52e6-4b3f-9ddf-5fd4412e322e",
"UpgradeFinalized" : true,
"Total" : 999334871040
} ]
}
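If you go the Tableau route, one low-effort integration is to flatten the LiveNodes field (itself a JSON-encoded string inside the JMX response) into a per-DataNode CSV that Tableau can read directly. The following is a rough sketch of that idea, not an official integration; the NameNode URL and output file are placeholder assumptions.

import csv
import json
import requests

# Placeholder NameNode web address; use your NameNode's HTTP host and port.
URL = "http://127.0.0.1:9870/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"

info = requests.get(URL).json()["beans"][0]
live_nodes = json.loads(info["LiveNodes"])  # LiveNodes is a JSON string

with open("datanode_usage.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["datanode", "capacity_bytes", "used_bytes", "remaining_bytes", "num_blocks"])
    for node, stats in live_nodes.items():
        writer.writerow([node, stats["capacity"], stats["used"], stats["remaining"], stats["numBlocks"]])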

Elasticsearch indexing is very slow

I have a Titan database with Cassandra storage backend, and I am trying to create a mixed index based on two property keys.
I am able to register the index using the following commands:
graph=TitanFactory.open(config);
graph.tx().rollback()
m = graph.openManagement();
m.buildIndex("titleBodyMixed", Vertex.class).addKey(m.getPropertyKey("title")).addKey(m.getPropertyKey("body")).buildMixedIndex("search");
m.commit();
m.awaitGraphIndexStatus(graph, 'titleBodyMixed').status(SchemaStatus.REGISTERED).timeout(3, java.time.temporal.ChronoUnit.MINUTES).call();
When I check, the index is successfully registered after a few seconds. In the next step, I try to reindex the database using the following commands:
m = graph.openManagement();
m.updateIndex(m.getGraphIndex('titleBodyMixed'), SchemaAction.REINDEX).get();
However, the updateIndex command does not finish (even after 12 hours).
I have about 300k entries in the database, and each entry has one title and one body to index.
My question is: how can I speed up the indexing?
When I run the top command, I see that the CPU is not saturated by the indexing processes.
My Titan configuration is as below:
config =new BaseConfiguration();
config.setProperty("storage.backend","cassandra");
config.setProperty("storage.hostname", "127.0.0.1");
config.setProperty("storage.cassandra.keyspace", "smartgraph");
config.setProperty("index.search.elasticsearch.interface", "NODE");
config.setProperty("index.search.backend", "elasticsearch");
The following shows the Elasticsearch service properties:
curl -X GET 'http://localhost:9200'
{
"status" : 200,
"name" : "Ms. Marvel",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "1.7.2",
"build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
"build_timestamp" : "2015-09-14T09:49:53Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
The idea is that the reindexing process will not start unless all sessions are closed. You most probably have open sessions with the database, and therefore the reindex job is never triggered.
With a Gremlin script you could close all of those sessions; you should see that the indexing takes place afterwards.
Will that help?

MongoDb's Sharding does not improve application in a Lab setup

I'm currently developing a mobile application powered by a MongoDB database. Although everything is working fine right now, we want to add sharding to be prepared for the future.
In order to test this, we've created a lab environment (running in Hyper-V) to test the various scenarios.
The following servers have been created:
Ubuntu Server 14.04.3 Non-Sharding (Database Server) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Configuration Server) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Query Router Server) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Database Server 01) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Database Server 02) (256 MB RAM / limited to 10% CPU).
A small console application has been created in C# to measure the time needed to perform the inserts.
This console application imports 10,000 persons with the following properties:
- Name
- Firstname
- Full Name
- Date Of Birth
- Id
All 10,000 records differ only by '_id'; all the other fields are identical across records.
It's important to note that every test is run exactly 3 times.
After every test, the database is removed so the system is clean again.
Find the results of the test below:
Insert 10,000 records without sharding
Writing 10,000 records | Non-Sharding environment - Full Disk IO #1: 14 Seconds.
Writing 10,000 records | Non-Sharding environment - Full Disk IO #2: 14 Seconds.
Writing 10,000 records | Non-Sharding environment - Full Disk IO #3: 12 Seconds.
Insert 10,000 records with a single database shard
Note: the shard key has been set to the hashed _id field.
See the JSON below for (partial) sharding information:
shards:
{ "_id" : "shard0000", "host" : "192.168.137.12:27017" }
databases:
{ "_id" : "DemoDatabase", "primary" : "shard0000", "partitioned" : true }
DemoDatabase.persons
shard key: { "_id" : "hashed" }
unique: false
balancing: true
chunks:
shard0000 2
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong(0) } on : shard0000 Timestamp(1, 1)
{ "_id" : NumberLong(0) } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 2)
Results:
Writing 10,000 records | Single Sharding environment - Full Disk IO #1: 1 Minute, 59 Seconds.
Writing 10,000 records | Single Sharding environment - Full Disk IO #2: 1 Minute, 51 Seconds.
Writing 10,000 records | Single Sharding environment - Full Disk IO #3: 1 Minute, 52 Seconds.
Insert 10,000 records with two database shards
Note: the shard key has been set to the hashed _id field.
See the JSON below for (partial) sharding information:
shards:
{ "_id" : "shard0000", "host" : "192.168.137.12:27017" }
{ "_id" : "shard0001", "host" : "192.168.137.13:27017" }
databases:
{ "_id" : "DemoDatabase", "primary" : "shard0000", "partitioned" : true }
DemoDatabase.persons
shard key: { "_id" : "hashed" }
unique: false
balancing: true
chunks:
shard0000 2
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-4611686018427387902") } on : shard0000 Timestamp(2, 2)
{ "_id" : NumberLong("-4611686018427387902") } -->> { "_id" : NumberLong(0) } on : shard0000 Timestamp(2, 3)
{ "_id" : NumberLong(0) } -->> { "_id" : NumberLong("4611686018427387902") } on : shard0001 Timestamp(2, 4)
{ "_id" : NumberLong("4611686018427387902") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 Timestamp(2, 5)
Results:
Writing 10,000 records | Double Sharding environment - Full Disk IO #1: 49 Seconds.
Writing 10,000 records | Double Sharding environment - Full Disk IO #2: 53 Seconds.
Writing 10,000 records | Double Sharding environment - Full Disk IO #3: 54 Seconds.
According to the tests executed above, sharding does work: the more shards I add, the better the performance.
However, I don't understand why I'm facing such a huge performance drop when working with shards rather than using a single server.
I need blazing fast reads and writes, so I thought that sharding would be the solution, but it seems that I'm missing something here.
Can anyone point me in the right direction?
Kind regards
The layers between the routing server and the config server, and between the routing server and the data nodes, add latency.
If you have a 1 ms ping and 10k inserts, that is 10 seconds of latency that does not appear in the unsharded setup.
Depending on your configured level of write concern (if you configured any level of write acknowledgement), you could add another 10 seconds to your benchmarks on the sharded environment, because each insert blocks until an acknowledgement is received from the data node.
If your write-concern is set to acknowledge and you have replica nodes, then you also have to wait for the write to propagate to your replica nodes, adding additional network latency. (You don't appear to have replica nodes though). And depending on your network topology, write-concern could add multiple layers of network latency if you use the default setting to allow chained replication (secondaries sync from other secondaries). https://docs.mongodb.org/manual/tutorial/manage-chained-replication/. If you have additional indexes and write concern, each replica node will have to write that index before returning a write-acknowledgement (it is possible to disable indexes on replica nodes though)
Without sharding and without replication (but with write-acknowledgement), while your inserts would still block on the insert, there is no additional latency due to the network layer.
Hashing the _id field also has a cost that accumulates to maybe a few seconds total for 10k. You can use an _id field with a high degree of randomness to avoid hashing, but I don't think this affects performance much.
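Not part of the original answer, but to make the round-trip arithmetic concrete: batching the 10,000 documents into bulk inserts amortizes the per-request network latency that dominates the sharded numbers, whereas one insert per document pays every hop 10,000 times. Below is a minimal pymongo sketch of the difference; the mongos address and document shape are placeholder assumptions, and the same idea applies to the C# driver's InsertMany.

from pymongo import MongoClient

# Placeholder mongos (query router) hostname from the lab setup.
client = MongoClient("mongodb://query-router:27017")
coll = client["DemoDatabase"]["persons"]

docs = [{"seq": i, "name": "Doe", "firstname": "John"} for i in range(10000)]

# One insert per document: 10,000 round trips through mongos to the shards.
# With a 1 ms hop, that alone costs roughly 10 seconds before any disk work.
# for doc in docs:
#     coll.insert_one(doc)

# Bulk insert: the driver packs documents into far fewer network messages,
# so the per-hop latency is paid far fewer times.
coll.insert_many(docs, ordered=False)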

Why doesn't setting the refresh interval in Elasticsearch increase performance?

I read the page (site link) about increasing indexing performance.
The page explains how to increase performance, but it did not improve indexing speed in Elasticsearch when I used the bulk Python API with elasticsearch-py.
Even with all the configuration changes, bulk indexing performance did not improve.
I used parallel processes and threads; the maximum average is about 30,000 documents indexed per second.
What am I doing wrong?
Master nodes: 1
Data nodes: 5 (including the master node)
CPU: Intel(R) Xeon(R) CPU E5645 @ 2.40GHz
RAM: 32 GB
ES_HEAP_SIZE: 10 GB
Thanks
It actually increases performance dramatically (more than 50% in my case). You just need to disable the refresh_interval (and enable it again when you finish indexing data):
curl -XPUT "http://localhost:9200/$INDEX_NAME/_settings" -d '{ "index" : { "refresh_interval" : "-1" }}'
#index data......
curl -XPUT "http://localhost:9200/$INDEX_NAME/_settings" -d '{ "index" : { "refresh_interval" : "1s" }}'
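Since the question mentions elasticsearch-py, here is the same toggle as a small Python sketch; the host and index name are placeholder assumptions.

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder host
index_name = "my_index"                        # placeholder index name

# Disable automatic refresh while bulk indexing runs.
es.indices.put_settings(index=index_name, body={"index": {"refresh_interval": "-1"}})

# ... run the bulk indexing here ...

# Restore a 1 s refresh interval once indexing is done.
es.indices.put_settings(index=index_name, body={"index": {"refresh_interval": "1s"}})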
