The Dmgr console of our development environment is very slow. We have checked it from all angles but are unable to find the exact reason.
Dev WebSphere runs on an AIX server that initially had 20 GB of RAM; we even added another 8 GB, but we are still facing the slowness with 28 GB of RAM.
And we have 10 different JVMs running in 10 different clusters in Dev, which share the RAM as below:
JVM1: 1 GB, JVM2: 2 GB, JVM3: 2 GB, JVM4: 1 GB, JVM5: 2 GB, JVM6: 1 GB, JVM7: 1 GB, JVM8: 1 GB, JVM9: 2 GB, JVM10: 2 GB, DMGR: 2 GB, node agent: 256 MB
So a total of about 17.6 GB (of 28 GB) of RAM is allocated, but we still face slowness in the DMGR while:
1.) Navigating the console
2.) Performing node synchronisation
3.) Starting the DMGR
4.) We also have 24 applications running in Dev; 4 to 5 of them are around 330 MB in size and are deployed to JVMs that have 2 GB heaps (could this be one of the reasons?)
What could be the possible reasons for this DMGR slowness? Can anyone tell me?
A low max JVM heap size on the dmgr JVM can cause the interactive bits of the console to act mysteriously slow.
You can change the heap size pretty easily:
http://www-01.ibm.com/support/docview.wss?uid=swg21329319
In the navigation panel, click System Administration > Deployment Manager > Process definition.
Under Additional Properties, click Java Virtual Machine. Type 1024 in the Maximum Heap Size field.
Save the changes to the master repository.
Restart all servers, node agents, and the deployment manager.
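If you want to confirm what heap the dmgr is actually running with before and after the change, one quick (hedged) check from the AIX command line is to look at the -Xms/-Xmx flags on the dmgr process. This assumes the process is named dmgr and that ps shows the full command line; adjust the grep pattern if your deployment manager is named differently:

    # Show the heap flags the deployment manager JVM was started with
    ps -ef | grep '[d]mgr' | tr ' ' '\n' | grep -E '^-Xm[sx]'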
We removed entries from the virtual hosts for hosts that are no longer used or no longer on the network. This seems to have helped.
Related
We have a 4-node cluster with Hue 3.9.0 installed. The NameNode has 24 GB of RAM and each DataNode has 20 GB. We have several jobs running that consume resources:
15/24 GB (NameNode)
14/20 GB (DataNode)
13/20 GB (DataNode)
6/20 GB (DataNode)
We also run queries in Impala and Hive. Time after time those queries consume all the available RAM on the NameNode, which causes Hue to crash every time. When it happens, Cloudera Manager (CM) shows Hue's health as bad (process status: "Bad : This role's process exited. This role is supposed to be started.") while all the other services such as HBase, HDFS, Impala, and Hive are healthy. After restarting the Hue service via CM it works fine again. How can we prevent Hue from crashing because of a lack of RAM?
I think what I am looking for is an option to reserve enough RAM for the Hue service, but all I could find so far were Impala configuration options set via the Hue configuration tab (with our current values in brackets):
Impala Daemon Memory Limit (mem_limit):
Impala Daemon Default Group (1701 MB)
Impala Daemon Group 1 (2665 MB)
Java Heap Size of Impala Llama ApplicationMaster in Bytes (2 GB)
But in any case, running a sequence of queries (in parallel or one after another) eventually consumes all available RAM; it seems that RAM is not freed after a query is done. I would rather expect Impala and Hive to say that they do not have enough RAM to continue than to crash other services such as Hue.
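One thing that may help, as a sketch rather than a definitive fix: Impala's MEM_LIMIT query option caps how much memory a single query may use per daemon, so a runaway query fails with a memory-limit error instead of exhausting the NameNode host and taking Hue with it. The host name, limit, and table below are placeholders:

    # Cap this session's queries at 2 GB per Impala daemon; a query needing more
    # is cancelled with a memory limit error rather than eating all available RAM
    impala-shell -i namenode-host -q "SET MEM_LIMIT=2g; SELECT COUNT(*) FROM my_table;"

This does not reserve RAM for Hue itself, but it stops individual queries from growing without bound.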
If I have 3 spark applications all using the same yarn cluster, how should I set
yarn.nodemanager.resource.cpu-vcores
in each of the 3 yarn-site.xml?
(each Spark application is required to have its own yarn-site.xml on the classpath)
Does this value even matter in the client-side yarn-site.xml files?
If it does:
Let's say the cluster has 16 cores.
Should the value in each yarn-site.xml be 5 (for a total of 15, leaving 1 core for system processes)? Or should I set each one to 15?
(Note: Cloudera indicates one core should be left for system processes here: http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/ however, they do not go into details of using multiple clients against the same cluster)
Assume Spark is running with yarn as the master, and running in cluster mode.
Are you talking about the server-side configuration for each YARN Node Manager? If so, it would typically be configured to be a little less than the number of CPU cores (or virtual cores if you have hyperthreading) on each node in the cluster. So if you have 4 nodes with 4 cores each, you could dedicate for example 3 per node to the YARN node manager and your cluster would have a total of 12 virtual CPUs.
Then you request the desired resources when submitting the Spark job (see http://spark.apache.org/docs/latest/submitting-applications.html for example) to the cluster and YARN will attempt to fulfill that request. If it can't be fulfilled, your Spark job (or application) will be queued up or there will eventually be a timeout.
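As a rough illustration of that second part (assuming cluster mode, with placeholder sizes and jar name; none of this comes from your actual setup):

    # Request 3 executors with 4 cores each (12 vcores) plus 1 core for the driver,
    # which runs on a cluster node because of --deploy-mode cluster. YARN grants this
    # only if the NodeManagers' yarn.nodemanager.resource.cpu-vcores add up to enough
    # capacity; otherwise the application waits in the queue or eventually times out.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 3 \
      --executor-cores 4 \
      --executor-memory 4g \
      my_spark_job.jar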
You can configure different resource pools in YARN to guarantee a specific amount of memory/CPU resources to such a pool, but that's a little bit more advanced.
If you submit your Spark application in cluster mode, you have to consider that the Spark driver will run on a cluster node and not your local machine (that one that submitted it). Therefore it will require at least 1 virtual CPU more.
Hope that clarifies things a little for you.
I have an issue: I am running my web app on a Linux machine using Tomcat.
The issue is that when I start my application:
1. It allocates 2 GB of real memory.
2. When I process around 5 million records or so, it increases again to 2.5 GB.
3. The problem comes after shutting Tomcat down: the memory is not released at all.
System details: 32 GB RAM, Ubuntu, Java 7
Software: DB = Oracle, Tomcat 7
Thanks
First, please check if the process was killed when you shut down the Tomcat server. For example: ps -ef|grep tomcat.
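A minimal sketch of what to look at from the shell after running Tomcat's shutdown script (the grep pattern is an assumption; match whatever your Tomcat process is actually called):

    # 1. Make sure no Tomcat JVM is still running after shutdown.sh
    ps -ef | grep -i '[t]omcat'

    # 2. Check how much memory is genuinely free. Linux keeps freed pages in the
    #    page cache, so "used" can look high even though that memory is reclaimable;
    #    look at the "available" (or "-/+ buffers/cache") figure instead.
    free -m

If the java process is gone but "used" memory still looks high, that is usually just the page cache, not a leak.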
I have a light-weight Hadoop environment:
2 NameNodes (JobTracker/HBase Master) + 3 DataNodes (TaskTracker/HBase RegionServer)
Each has roughly two quad-core CPUs and 16-24 GB of memory, with about 15 TB of storage in total.
I am wondering what the server specs should look like if I were to go for 3 ZooKeepers. Can anyone share some experience?
From HBase's perspective:
Give each ZooKeeper server around 1 GB of RAM and, if possible, its own dedicated disk (a dedicated disk is the best thing you can do to ensure a performant ZooKeeper ensemble). For very heavily loaded clusters, run ZooKeeper servers on separate machines from RegionServers (DataNodes and TaskTrackers).
- A dedicated disk should be configured for the snapshots and the growing transaction logs.
- Sufficient RAM is required so that ZooKeeper does not swap.
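As a minimal sketch of the RAM point, assuming a plain Apache ZooKeeper install: bin/zkEnv.sh sources conf/java.env if that file exists, so you can pin the server heap at around 1 GB there (the path and size below are illustrative):

    # Fix the ZooKeeper server heap at 1 GB so it never grows into swap
    echo 'export SERVER_JVMFLAGS="-Xms1g -Xmx1g"' > /opt/zookeeper/conf/java.env

For the dedicated-disk point, zoo.cfg's dataLogDir setting lets you keep the transaction log on a different disk from dataDir (where the snapshots live).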
WebLogic 10.3 gives an out-of-memory error.
I have done the following things:
Increased -Xms to 512m
Increased -Xmx to 1024m
Increased the max perm size in setdomainenv.bat
Is there any other way to resolve this issue? I have a 2 GB system.
It is a production machine and the log is around 4 GB in size. When I analysed the log I found many connection-refused errors.
You'll need to profile your application to find the memory leak. It could be open database connections or other resources not being handled properly.
Just increasing Xms and Xmx won't work beyond a point.
Take a heap dump into an HPROF file and analyse it using the Eclipse Memory Analyzer Tool or VisualVM,
or monitor the JVM live using JConsole.
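For example, assuming the server runs on a HotSpot JDK (JRockit, which often ships with WebLogic 10.3, has its own tooling such as jrcmd) and with the pid and output path as placeholders:

    # Capture a heap dump from the running WebLogic JVM for analysis in MAT or VisualVM
    jmap -dump:format=b,file=/tmp/weblogic-heap.hprof <weblogic_pid>

    # Or have the JVM write a dump automatically the next time it runs out of memory,
    # by adding these to the server start arguments:
    #   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp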