I have an issue: I am running my web app on a Linux machine using Tomcat.
The issue is that when I start my application:
1. It allocates 2 GB of real memory.
2. When I process around 5 million records or so, memory increases to 2.5 GB.
3. The problem arrives after shutting Tomcat down: the memory is not released at all.
System details: 32 GB RAM, Ubuntu, Java 7
Software: DB = Oracle, Tomcat 7
Thanks
First, please check if the process was killed when you shut down the Tomcat server. For example: ps -ef|grep tomcat.
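A quick way to check is sketched below (the grep pattern is just an example); also bear in mind that Linux keeps freed memory in the page cache, so free can still look "used" even after the process has exited:

    ps -ef | grep -i tomcat | grep -v grep     # is a Tomcat JVM still running after shutdown?
    ps -o pid,rss,cmd -C java                  # resident memory of any remaining Java processes
    free -m                                    # buffers/cache are reclaimable, not leaked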
I'm really not sure where to ask this.
I have a Spring Boot application running and connected to a Postgres DB. These are the machine specs:
Laptop: 8 GB RAM and an 8-core CPU.
The application runs out of IntelliJ or as a stand-alone jar. The application and Postgres run on the same machine. The time from my logon screen into the application is less than half a second.
Server: 16 GB RAM and an 8-core CPU.
The application runs inside Tomcat 8.5; I also tried running it stand-alone. The application and Postgres run on the same machine. The time from my logon screen into the application is about 15 seconds.
So I have no idea why it is so much slower on the server. I can give more info if you tell me what else to look for.
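One way I could start narrowing it down (the URL below is just a placeholder for the logon endpoint) is to time the raw HTTP request both from another machine and from the server itself, to see whether the 15 seconds is spent inside the application or somewhere in front of it:

    time curl -s -o /dev/null http://server:8080/login       # from another machine
    time curl -s -o /dev/null http://localhost:8080/login    # on the server itself, to rule out network/DNS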
The Dmgr console of our development environment is very slow; we have checked it from all angles but are unable to find the exact reason.
Dev WebSphere runs on an AIX server which initially had 20 GB of RAM; we even increased it by another 8 GB, but we still face the slowness with 28 GB of RAM.
And we have 10 different JVMs running in 10 different clusters in Dev, which share the RAM like below:
JVM1: 1 GB
JVM2: 2 GB
JVM3: 2 GB
JVM4: 1 GB
JVM5: 2 GB
JVM1: 1 GB
JVM1: 1 GB
JVM1: 1 GB
JVM1: 2 GB
JVM1: 2 GB
DMGR: 2 GB
Nodeagent: 256 MB
So a total of 17.6 GB (of 28 GB) of RAM is allocated, but we still face slowness in the DMGR while:
1.) Navigating
2.) Performing node synchronisation
3.) Starting the DMGR
4.) We also have 24 applications running in Dev, 4 to 5 of which are around 330 MB in size and are deployed in JVMs that have 2 GB of RAM (could this be one of the reasons?)
What could be the possible reason for this DMGR slowness? Can anyone tell me?
A low max JVM heap size on the dmgr JVM can cause the interactive bits of the console to act mysteriously slow.
You can change the heap size pretty easily:
http://www-01.ibm.com/support/docview.wss?uid=swg21329319
1. In the navigation panel, click System Administration > Deployment Manager > Process definition.
2. Under Additional Properties, click Java Virtual Machine.
3. Type 1024 in the Maximum Heap Size field.
4. Save the changes to the master repository.
5. Restart all servers, node agents, and the deployment manager.
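If you want to confirm what heap settings the dmgr JVM is actually running with after the change, a quick check from the shell (the grep pattern is only an assumption about how the process is named) is to look at its command line:

    ps -ef | grep dmgr | grep -v grep    # look for the -Xms/-Xmx values on the dmgr java command line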
We removed hosts that are no longer used / no longer on the network from the virtual hosts configuration. This seems to have helped.
I am new to LXC and Docker. Does the maximum number of containers you can run with Docker depend solely on CPU and RAM, or are there other factors associated with running multiple containers simultaneously?
As mentioned in the comments to your question, it will largely depend on the requirements of the applications inside the containers.
What follows is anecdotal data I collected for this answer (this is on a MacBook Pro with 8 cores and 16 GB RAM, with Docker running in VirtualBox via boot2docker, given 2 GB and 2 of the MBP's cores):
I was able to launch 242 (idle) redis containers before getting:
2014/06/30 08:07:58 Error: Cannot start container c4b49372111c45ae30bb4e7edb322dbffad8b47c5fa6eafad890e8df4b347ffa: pipe2: too many open files
After that, top inside the VM reports CPU use at around 30%-55% user and 10%-12% system (each redis process seems to use 0.2%). Also, I get timeouts while trying to connect to a redis server.
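For reference, a rough sketch of how such a test could be reproduced (the image, loop count and stop-on-first-failure behaviour are my assumptions, not part of the original run):

    for i in $(seq 1 250); do
      docker run -d redis > /dev/null || break   # launch idle redis containers until one fails to start
    done
    docker ps -q | wc -l                         # how many actually came up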
WebLogic 10.3 gives an out-of-memory error.
I have done the following things (where these are set is sketched after the list):
Increased -Xms to 512m
Increased -Xmx to 1024m
Increased the max perm size in setdomainenv.bat
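For reference, those values end up in the memory arguments set by setdomainenv, roughly like this (a sketch only: the exact variable name differs between installs, often USER_MEM_ARGS, and the 256m perm size is just an example value):

    # sketch of the .sh equivalent of setdomainenv.bat
    MEM_ARGS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m"
    export MEM_ARGS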
Is there any other way to resolve this issue? I have a 2 GB system.
It is a production machine and the size of the log is around 4 GB. When I analysed the log I found many connection-refused errors.
You'll need to profile your application to find the memory leak. It could be open database connections or other resources not being handled properly.
Just increasing -Xms and -Xmx won't work beyond a point.
Take a heap dump into an HPROF file and analyse it using the Eclipse Memory Analyzer Tool or VisualVM,
or monitor the JVM live using JConsole.
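For example, the standard JDK tools can capture the dump (the PID and output path below are placeholders):

    jps -l                                            # find the WebLogic JVM's PID
    jmap -dump:format=b,file=/tmp/heap.hprof <pid>    # write an HPROF heap dump for that PID
    # or have the JVM dump automatically on OOM by adding to the start arguments:
    #   -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp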
I have a Liferay 6 with Tomcat setup on two machines:
Machine 1:
Windows 2003 Server
2 GB RAM, 2 GHz CPU
Mysql Ver 14.14 Distrib 5.1.49
Liferay 6.0.6 with Tomcat 6
Machine 2:
Linux CentOS 5.5
4 GB RAM, 2 GHz CPU
Mysql Ver 14.14 Distrib 5.5.10
Liferay 6.0.6 with Tomcat 6
Both Liferay systems have identical startup parameters and MySQL configurations.
The Liferay system contains a custom theme and a servlet filter hook that checks each URL access.
We have written a Grinder script to load-test the system, starting with 50 concurrent users.
The test script does the following things:
Open home page
Login with username/password
Enter security key (custom portlet)
Move to a private community
Logout
On the Windows system the response time is as expected (nearly 40 seconds mean time for each test in Grinder).
However, on the Linux system the response time is far too high (nearly 4 minutes) for the same operations.
We tried tuning the MySQL, Tomcat, connection pool and a few other parameters, but the result is the same. We also tested each Liferay against the other machine's MySQL (machine 1 Liferay -> machine 2 MySQL).
We are facing the same issue on Linux machines in our test environment and also at our client's end.
This looks like a duplicate question. I suspect your issue is related to memory / JVM configuration, and specifically garbage collection. High CPU utilization under small loads tends to point in that direction.
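If you want evidence for that, one option (these flags apply to the HotSpot JVMs of that era; the log path is a placeholder) is to turn on GC logging in Tomcat's startup options on both machines and compare:

    # e.g. appended to CATALINA_OPTS for the Tomcat running Liferay
    CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log"
    export CATALINA_OPTS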
In your Grinder script, have you set each of the steps as a separate transaction? This will allow you to see how much time each step is taking. It might be useful to know if everything is slower across the board, or if it's just one transaction type that's slowing you down.
Also, is there anything in the Tomcat logs on the Linux box you aren't seeing on Windows? Unexpected Java stack traces, etc.?
Finally, are the databases on each machine identical? Do they have the same amount of data? Do they have the same indexes?
edit: Is it one transaction that's taking up all the extra time, or is each transaction slower? When you run 'top' on your Linux box, is it the Tomcat Java process that's eating all your CPU, or some other process?