How to allocate resources to containers when using Spring YARN?

I am trying the Spring YARN examples on [github][1], which are built with Gradle, and I successfully ran the custom-amservice example on YARN.
But I don't know how to allocate specific resources to the containers. I tried to override the onContainerAllocated and onContainerLaunched methods of StaticEventingAppmaster in my CustomAppmaster and set the resources there, as shown below.
@Override
protected void onContainerAllocated(Container container) {
    // == allocate resource
    Resource resource = new ResourcePBImpl();
    resource.setMemory(1300);
    resource.setVirtualCores(7);
    container.setResource(resource);
    // ====
    if (getMonitor() instanceof ContainerAware) {
        ((ContainerAware) getMonitor()).onContainer(Arrays.asList(container));
    }
    getLauncher().launchContainer(container, getCommands());
}

@Override
protected void onContainerLaunched(Container container) {
    // == allocate resource
    Resource resource = new ResourcePBImpl();
    resource.setMemory(1300);
    resource.setVirtualCores(7);
    container.setResource(resource);
    // ====
    if (getMonitor() instanceof ContainerAware) {
        ((ContainerAware) getMonitor()).onContainer(Arrays.asList(container));
    }
}
In the log it seems to work:
2014-12-30 20:06:35,524 DEBUG [AbstractPollingAllocator] - response has 1 new containers
2014-12-30 20:06:35,525 DEBUG [AbstractPollingAllocator] - new container: container_1419934738198_0004_01_000003
//// this line shows the memory is 1300 and the CPU core count is 7
2014-12-30 20:06:35,525 DEBUG [DefaultContainerMonitor] - Reporting container=Container: [ContainerId: container_1419934738198_0004_01_000003, NodeId: yarn-master1:57799, NodeHttpAddress: yarn-master1:8042, Resource: <memory:1300, vCores:7>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.0.170:57799 }, ]
2014-12-30 20:06:35,526 DEBUG [DefaultContainerMonitor] - State after reportContainer: DefaultContainerMonitor [allocated=[container_1419934738198_0004_01_000003,], running=[container_1419934738198_0004_01_000002,], completed=[], failed=[]]
//// this line shows the memory is 1300 and the CPU core count is 7
2014-12-30 20:06:35,526 DEBUG [DefaultContainerLauncher] - Launching container: Container: [ContainerId: container_1419934738198_0004_01_000003, NodeId: yarn-master1:57799, NodeHttpAddress: yarn-master1:8042, Resource: <memory:1300, vCores:7>, Priority: 0, Token: Token { kind: ContainerToken, service: 192.168.0.170:57799 }, ] with commands $JAVA_HOME/bin/java,org.springframework.yarn.container.CommandLineContainerRunner,container-context.xml,yarnContainer,1><LOG_DIR>/Container.stdout,2><LOG_DIR>/Container.stderr
However, when I run an application whose resource usage goes beyond the limit, the log shows that the memory limit remains 1 GB instead of 1300 MB; see below:
2014-12-30 20:07:05,929 DEBUG [AbstractPollingAllocator] - response has 1 completed containers
// The same container was stopped because it went beyond the limits.
2014-12-30 20:07:05,932 DEBUG [AbstractPollingAllocator] - completed container: container_1419934738198_0004_01_000003 with status=ContainerStatus: [ContainerId: container_1419934738198_0004_01_000003, State: COMPLETE, Diagnostics: Container [pid=10587,containerID=container_1419934738198_0004_01_000003] is running beyond virtual memory limits. Current usage: 86.6 MB of 1 GB physical memory used; 31.8 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1419934738198_0004_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 10587 32315 10587 10587 (bash) 2 3 12652544 353 /bin/bash -c /home/novelbio/software/jdk//bin/java org.springframework.yarn.container.CommandLineContainerRunner container-context.xml yarnContainer 1>/home/novelbio/software/hadoop/logs/userlogs/application_1419934738198_0004/container_1419934738198_0004_01_000003/Container.stdout 2>/home/novelbio/software/hadoop/logs/userlogs/application_1419934738198_0004/container_1419934738198_0004_01_000003/Container.stderr
|- 10761 10587 10587 10587 (java) 108 10 34135896064 21811 /home/novelbio/software/jdk//bin/java org.springframework.yarn.container.CommandLineContainerRunner container-context.xml yarnContainer
, ExitStatus: 0, ]
The key point is this line in the log: "Current usage: 86.6 MB of 1 GB physical memory used", i.e. 1 GB instead of 1.3 GB.
So I think my method didn't take effect. Could anybody tell me how to allocate resources correctly?

This is one of the problematic areas in YARN which I believe will eventually get better as more and more non-MR apps are run on YARN. I believe your settings are correctly applied, but a little strange behaviour from YARN is causing these problems. Currently there is very little we can do from an application point of view, because most of the memory settings are enforced in YARN itself and requests from apps are just "requests".
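For what it's worth, the place where an application states its ask in Spring YARN is the allocator configuration on the appmaster, not the Container object after allocation. A minimal sketch using the XML namespace, assuming the memory/virtualcores attributes of the container-allocator element (values illustrative; double-check the attribute names against the reference docs for your version):
<yarn:master>
  <!-- the ask is sent with the allocation request; the scheduler may still
       round the memory up to its configured allocation increment -->
  <yarn:container-allocator memory="1300" virtualcores="7" priority="0"/>
</yarn:master>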
Spring XD on YARN relies on this same functionality, and it's worth checking what we wrote in its docs: https://github.com/spring-projects/spring-xd/wiki/Running-on-YARN (see the section Configuring YARN memory reservations).
I'll try to make sure that this same info also makes it into our Spring Hadoop and Spring YARN reference docs.
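In the meantime, the kill in your log is the NodeManager's virtual memory check: "31.8 GB of 2.1 GB virtual memory used" is the default vmem-to-pmem ratio of 2.1 applied to a 1 GB container. The enforcement knobs live in yarn-site.xml on the NodeManagers; a sketch with the stock defaults shown (tune or disable with care):
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- memory asks are rounded up to a multiple of this -->
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value> <!-- virtual memory allowed per unit of physical memory -->
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>true</value> <!-- set to false to disable the virtual memory kill entirely -->
</property>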

Related

Possible reasons for groovy program running as kubernetes job dumping threads during execution

I have a simple Groovy script that leverages the GPars library's withPool functionality to launch HTTP GET requests to two internal API endpoints in parallel.
The script runs fine locally, both directly and as a Docker container.
When I deploy it as a Kubernetes Job (in our internal EKS cluster: 1.20), it runs there as well, but the moment it hits the first withPool call I see a giant thread dump; execution nevertheless continues and completes successfully.
NOTE: Containers in our cluster run with the following pod security context:
securityContext:
  fsGroup: 2000
  runAsNonRoot: true
  runAsUser: 1000
Environment
# From the k8s job container
groovy@app-271df1d7-15848624-mzhhj:/app$ groovy --version
WARNING: Using incubator modules: jdk.incubator.foreign, jdk.incubator.vector
Groovy Version: 4.0.0 JVM: 17.0.2 Vendor: Eclipse Adoptium OS: Linux
groovy@app-271df1d7-15848624-mzhhj:/app$ ps -ef
UID PID PPID C STIME TTY TIME CMD
groovy 1 0 0 21:04 ? 00:00:00 /bin/bash bin/run-script.sh
groovy 12 1 42 21:04 ? 00:00:17 /opt/java/openjdk/bin/java -Xms3g -Xmx3g --add-modules=ALL-SYSTEM -classpath /opt/groovy/lib/groovy-4.0.0.jar -Dscript.name=/usr/bin/groovy -Dprogram.name=groovy -Dgroovy.starter.conf=/opt/groovy/conf/groovy-starter.conf -Dgroovy.home=/opt/groovy -Dtools.jar=/opt/java/openjdk/lib/tools.jar org.codehaus.groovy.tools.GroovyStarter --main groovy.ui.GroovyMain --conf /opt/groovy/conf/groovy-starter.conf --classpath . /tmp/script.groovy
groovy 116 0 0 21:05 pts/0 00:00:00 bash
groovy 160 116 0 21:05 pts/0 00:00:00 ps -ef
Script (relevant parts)
@Grab('org.codehaus.gpars:gpars:1.2.1')
import static groovyx.gpars.GParsPool.withPool
import groovy.json.JsonSlurper

final def jsl = new JsonSlurper()
//...
while (!(nextBatch = getBatch(batchSize)).isEmpty()) {
    def devThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = dev + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds,
                            readTimeout: 10.seconds, useCaches: true, allowUserInteraction: false))
                    devResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    def stgThread = Thread.start {
        withPool(poolSize) {
            nextBatch.eachParallel { kw ->
                String url = stg + "&" + "query=$kw"
                try {
                    def response = jsl.parseText(url.toURL().getText(connectTimeout: 10.seconds,
                            readTimeout: 10.seconds, useCaches: true, allowUserInteraction: false))
                    stgResponses[kw] = response
                } catch (e) {
                    println("\tFailed to fetch: $url | error: $e")
                }
            }
        }
    }
    devThread.join()
    stgThread.join()
}
Dockerfile
FROM groovy:4.0.0-jdk17 as builder
USER root
RUN apt-get update && apt-get install -yq bash curl wget jq
WORKDIR /app
COPY bin /app/bin
RUN chmod +x /app/bin/*
USER groovy
ENTRYPOINT ["/bin/bash"]
CMD ["bin/run-script.sh"]
The bin/run-script.sh simply downloads the above groovy script at runtime and executes it.
wget "$GROOVY_SCRIPT" -O "$LOCAL_FILE"
...
groovy "$LOCAL_FILE"
As soon as the execution hits the first call to withPool(poolSize), there's a giant thread dump, but execution continues.
I'm trying to figure out what could be causing this behavior. Any ideas 🤷🏽‍♂️?
Thread dump
For posterity, answering my own question here.
The issue turned out to be this log4j2 JVM hot-patch that we're currently leveraging to fix the recent log4j2 vulnerability. This agent (running as a DaemonSet) patches all running JVMs in all our k8s clusters.
This, somehow, causes my OpenJDK 17 based app to thread dump. I found the same issue with an Elasticsearch 8.1.0 deployment as well (which also uses a pre-packaged OpenJDK 17). That one is a service, so I could see a thread dump happening pretty much every half hour! Interestingly, there are other JVM services (and some SOLR 8 deployments) that don't have this issue 🤷🏽‍♂️.
Anyway, I worked with our devops team to temporarily exclude the node that deployment was running on, and lo and behold, the thread dumps disappeared!
Balance in the universe has been restored 🧘🏻‍♂️.
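For anyone needing the same workaround, a minimal sketch of taking a node out of scheduling (assuming cluster-admin access; the node name here is hypothetical):
# mark the node unschedulable so new pods land elsewhere
kubectl cordon ip-10-0-1-23.ec2.internal
# optionally evict the pods already running there; --ignore-daemonsets is needed
# because DaemonSet pods (like the patch agent) cannot be evicted by drain
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets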

Nomad Job - Failed to place all allocations

I'm trying to deploy an AWS EBS volume via Nomad, but I'm getting the error below. How do I resolve it?
$ nomad job plan -var-file bambootest.vars bamboo2.nomad
+/- Job: "bamboo2"
+/- Stop: "true" => "false"
+/- Task Group: "main" (1 create)
  Volume {
    AccessMode: "single-node-writer"
    AttachmentMode: "file-system"
    Name: "bambootest"
    PerAlloc: "false"
    ReadOnly: "false"
    Source: "bambootest"
    Type: "csi"
  }
  Task: "web"
Scheduler dry-run:
WARNING: Failed to place all allocations.
  Task Group "main" (failed to place 1 allocation):
    Class "system": 3 nodes excluded by filter
    Class "svt": 2 nodes excluded by filter
    Class "devtools": 2 nodes excluded by filter
    Class "bambootest": 2 nodes excluded by filter
    Class "ambt": 2 nodes excluded by filter
    Constraint "${meta.namespace} = bambootest": 9 nodes excluded by filter
    Constraint "missing CSI Volume bambootest": 2 nodes excluded by filter
Below is an excerpt of the volume block that seems to be the problem.
group "main" {
  count = 1

  volume "bambootest" {
    type            = "csi"
    source          = "bambootest"
    read_only       = false
    access_mode     = "single-node-writer"
    attachment_mode = "file-system"
  }

  task "web" {
    driver = "docker"

Writing Parquet file in standalone mode works, multiple worker mode fails

In Spark, version 1.6.1 (code is in Scala 2.10), I am trying to write a data frame to a Parquet file:
import sc.implicits._
val triples = file.map(p => _parse(p, " ", true)).toDF()
triples.write.mode(SaveMode.Overwrite).parquet("hdfs://some.external.ip.address:9000/tmp/table.parquet")
When I do it in development mode, everything works fine. It also works fine if I set up a master and one worker in standalone mode in a Docker environment (separate Docker containers) on the same machine. It fails when I try to execute it on a cluster (1 master, 5 workers). If I set it up locally on the master, it also works.
When I try to execute it, I get following stacktrace:
{
"duration": "18.716 secs",
"classPath": "LDFSparkLoaderJobTest2",
"startTime": "2016-07-18T11:41:03.299Z",
"context": "sql-context",
"result": {
"errorClass": "org.apache.spark.SparkException",
"cause": "Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, curry-n3): java.lang.NullPointerException
at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:147)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetRelation.scala:101)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.abortTask$1(WriterContainer.scala:294)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:271)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:",
"stack":[
"org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)",
"scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)",
"scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)",
"org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)",
"scala.Option.foreach(Option.scala:236)",
"org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)",
"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)",
"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)",
"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)",
"org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)",
"org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)",
"org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)",
"org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)",
"org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:150)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)",
"org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)",
"org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)",
"org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)",
"org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)",
"org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)",
"org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)",
"org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)",
"org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)",
"org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)",
"org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)",
"org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)",
"org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)",
"org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)",
"org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:334)",
"LDFSparkLoaderJobTest2$.readFile(SparkLoaderJob.scala:55)",
"LDFSparkLoaderJobTest2$.runJob(SparkLoaderJob.scala:48)",
"LDFSparkLoaderJobTest2$.runJob(SparkLoaderJob.scala:18)",
"spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:268)",
"scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)",
"scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)",
"java.lang.Thread.run(Thread.java:745)"
],
"causingClass": "org.apache.spark.SparkException",
"message": "Job aborted."
},
"status": "ERROR",
"jobId": "54ad3056-3aaa-415f-8352-ca8c57e02fe9"
}
Notes:
The job is submitted via the Spark Jobserver.
The file that needs to be converted to a Parquet file is 15.1 MB in size.
Question:
Is there something I am doing wrong (I followed the docs)?
Or is there another way I can create the Parquet file, so all my workers have access to it?
In your standalone setup only one worker is writing with the ParquetRecordWriter, so it works fine.
In a real test, i.e. a cluster (1 master, 5 workers), it fails with the ParquetRecordWriter because you are writing concurrently from multiple workers.
Please try the below.
import sc.implicits._
val triples = file.map(p => _parse(p, " ", true)).toDF()
triples.write.mode(SaveMode.Append).parquet("hdfs://some.external.ip.address:9000/tmp/table.parquet")
Please see SaveMode.Append: when saving a DataFrame to a data source, if the data/table already exists, the contents of the DataFrame are expected to be appended to the existing data.
I had not exactly the same, but similar, issues writing dataframes to Parquet files in cluster mode. Those problems disappeared when deleting the file just before writing, using this convenience function write(..):
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
..
def main(arg: Array[String]) {
  ..
  val fs = FileSystem.get(sc.hadoopConfiguration)
  ..
  def write(df: DataFrame, fn: String) = {
    val op1 = s"hdfs:///user/you/$fn"
    // the Parquet output path is a directory, so delete it recursively before writing
    fs.delete(new Path(op1), true)
    df.write.parquet(op1)
  }
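For instance, with the triples DataFrame from the question, a usage sketch would be (the /user/you path above is a placeholder to adjust):
write(triples, "table.parquet")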
Give it a go, tell me if it works for you...

MongoDb's Sharding does not improve application in a Lab setup

I'm currently developing a mobile application powered by a Mongo database, and although everything is working fine right now, we want to add sharding to be prepared for the future.
In order to test this, we've created a lab environment (running in Hyper-V) to test the various scenarios.
The following servers have been created:
Ubuntu Server 14.04.3 Non-Sharding (Database Server) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Configuration Server) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Query Router Server) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Database Server 01) (256 MB RAM / limited to 10% CPU).
Ubuntu Server 14.04.3 Sharding (Database Server 02) (256 MB RAM / limited to 10% CPU).
A small console application has been created in C# to measure the time needed to perform inserts.
This console application imports 10,000 persons with the following properties:
- Name
- Firstname
- Full Name
- Date Of Birth
- Id
All 10,000 records differ only in '_id'; all the other fields are the same for all the records.
It's important to note that every test is run exactly 3 times.
After every test, the database is removed so the system is clean again.
Find the results of the test below:
Insert 10,000 records without sharding
Writing 10,000 records | Non-Sharding environment - Full Disk IO #1: 14 Seconds.
Writing 10,000 records | Non-Sharding environment - Full Disk IO #2: 14 Seconds.
Writing 10,000 records | Non-Sharding environment - Full Disk IO #3: 12 Seconds.
Insert 10,000 records with a single database shard
Note: The shard key has been set to the hashed _id field.
See Json below for (partial) sharding information:
shards:
{ "_id" : "shard0000", "host" : "192.168.137.12:27017" }
databases:
{ "_id" : "DemoDatabase", "primary" : "shard0000", "partitioned" : true }
DemoDatabase.persons
shard key: { "_id" : "hashed" }
unique: false
balancing: true
chunks:
shard0000 2
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong(0) } on : shard0000 Timestamp(1, 1)
{ "_id" : NumberLong(0) } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 Timestamp(1, 2)
Results:
Writing 10,000 records | Single Sharding environment - Full Disk IO #1: 1 Minute, 59 Seconds.
Writing 10,000 records | Single Sharding environment - Full Disk IO #2: 1 Minute, 51 Seconds.
Writing 10,000 records | Single Sharding environment - Full Disk IO #3: 1 Minute, 52 Seconds.
Insert 10,000 records with a double database shard
Note: The shard key has been set to the hashed _id field.
See Json below for (partial) sharding information:
shards:
{ "_id" : "shard0000", "host" : "192.168.137.12:27017" }
{ "_id" : "shard0001", "host" : "192.168.137.13:27017" }
databases:
{ "_id" : "DemoDatabase", "primary" : "shard0000", "partitioned" : true }
DemoDatabase.persons
shard key: { "_id" : "hashed" }
unique: false
balancing: true
chunks:
shard0000 2
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-4611686018427387902") } on : shard0000 Timestamp(2, 2)
{ "_id" : NumberLong("-4611686018427387902") } -->> { "_id" : NumberLong(0) } on : shard0000 Timestamp(2, 3)
{ "_id" : NumberLong(0) } -->> { "_id" : NumberLong("4611686018427387902") } on : shard0001 Timestamp(2, 4)
{ "_id" : NumberLong("4611686018427387902") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 Timestamp(2, 5)
Results:
Writing 10,000 records | Double Sharding environment - Full Disk IO #1: 49 Seconds.
Writing 10,000 records | Double Sharding environment - Full Disk IO #2: 53 Seconds.
Writing 10,000 records | Double Sharding environment - Full Disk IO #3: 54 Seconds.
According to the tests executed above, sharding does work: the more shards I add, the better the performance.
However, I don't understand why I'm facing such a huge performance drop when working with shards rather than using a single server.
I need blazing fast reads and writes, so I thought that sharding would be the solution, but it seems that I'm missing something here.
Can anyone point me in the right direction?
Kind regards
The layers between the routing server and the config server, and between the routing server and the data nodes, add latency.
If you have a 1 ms ping * 10k inserts, you have 10 seconds of latency that does not appear in the unsharded setup.
Depending on your configured level of write-concern (if you configured any level of write-acknowledgement), you could add another 10 seconds to your benchmarks in the sharded environment, due to blocking until an acknowledgement is received from the data node.
If your write-concern is set to acknowledged and you have replica nodes, then you also have to wait for the write to propagate to your replica nodes, adding additional network latency. (You don't appear to have replica nodes, though.) Depending on your network topology, write-concern can add multiple layers of network latency if you use the default setting that allows chained replication (secondaries syncing from other secondaries): https://docs.mongodb.org/manual/tutorial/manage-chained-replication/. And if you have additional indexes plus write-concern, each replica node will have to write those indexes before returning a write-acknowledgement (it is possible to disable indexes on replica nodes, though).
Without sharding and without replication (but with write-acknowledgement), while your inserts would still block on the insert, there is no additional latency due to the network layer.
Hashing the _id field also has a cost that accumulates to maybe a few seconds total for 10k. You can use an _id field with a high degree of randomness to avoid hashing, but I don't think this affects performance much.
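To make the round-trip arithmetic concrete: batching pays that network latency once per request instead of once per document. A minimal sketch with the MongoDB Java driver (the question's app is C#; the mongos address and field values here are hypothetical):
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.ArrayList;
import java.util.List;

public class BulkInsertSketch {
    public static void main(String[] args) {
        // hypothetical address of the mongos query router
        try (MongoClient client = MongoClients.create("mongodb://192.168.137.11:27017")) {
            MongoCollection<Document> persons =
                    client.getDatabase("DemoDatabase").getCollection("persons");
            List<Document> batch = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                batch.add(new Document("name", "Doe")
                        .append("firstname", "John")
                        .append("fullName", "John Doe")
                        .append("dateOfBirth", "1990-01-01"));
            }
            // one insertMany call amortizes the round trip over the whole batch,
            // instead of ~1 ms of network latency per individual insert
            persons.insertMany(batch);
        }
    }
}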

MapR installation failing for single node cluster

I was following the quick installation guide for a single-node cluster. For this I used a 20 GB storage file for MapR-FS, but during installation it gives 'Unable to find disks: /maprfs/storagefile'.
Here is my configuration file.
# Each Node section can specify nodes in the following format
# Hostname: disk1, disk2, disk3
# Specifying disks is optional. If not provided, the installer will use the values of 'disks' from the Defaults section
[Control_Nodes]
maprlocal.td.td.com: /maprfs/storagefile
#control-node2.mydomain: /dev/disk3, /dev/disk9
#control-node3.mydomain: /dev/sdb, /dev/sdc, /dev/sdd
[Data_Nodes]
#data-node1.mydomain
#data-node2.mydomain: /dev/sdb, /dev/sdc, /dev/sdd
#data-node3.mydomain: /dev/sdd
#data-node4.mydomain: /dev/sdb, /dev/sdd
[Client_Nodes]
#client1.mydomain
#client2.mydomain
#client3.mydomain
[Options]
MapReduce1 = true
YARN = true
HBase = true
MapR-DB = true
ControlNodesAsDataNodes = true
WirelevelSecurity = false
LocalRepo = false
[Defaults]
ClusterName = my.cluster.com
User = mapr
Group = mapr
Password = mapr
UID = 2000
GID = 2000
Disks = /maprfs/storagefile
StripeWidth = 3
ForceFormat = false
CoreRepoURL = http://package.mapr.com/releases
EcoRepoURL = http://package.mapr.com/releases/ecosystem-4.x
Version = 4.0.2
MetricsDBHost =
MetricsDBUser =
MetricsDBPassword =
MetricsDBSchema =
Below is the error that I am getting.
2015-04-16 08:18:03,659 callbacks 42 [INFO]: Running task: [Verify Pre-Requisites]
2015-04-16 08:18:03,661 callbacks 87 [ERROR]: maprlocal.td.td.com: Unable to find disks: /maprfs/storagefile from /maprfs/storagefile remove disks: /dev/sda,/dev/sda1,/dev/sda2,/dev/sda3 and retry
2015-04-16 08:18:03,662 callbacks 91 [ERROR]: failed: [maprlocal.td.td.com] => {"failed": true}
2015-04-16 08:18:03,667 installrunner 199 [ERROR]: Host: maprlocal.td.td.com has 1 failures
2015-04-16 08:18:03,668 common 203 [ERROR]: Control Nodes have failures. Please fix the failures and re-run the installation. For more information refer to the installer log at /opt/mapr-installer/var/mapr-installer.log
Please help me here.
Thanks
Shashi
The error is resolved by adding the --skip-checks option to the install command:
/opt/mapr-installer/bin/install --skip-checks new
