I'm really not sure where to ask this.
I have a Spring Boot application running and connected to a Postgres DB. These are the machine specs:
Laptop: 8 GB RAM, 8-core CPU.
I run the application either from IntelliJ or as a stand-alone application. The application and Postgres run on the same machine. Getting from my logon screen into the application takes less than half a second.
Server: 16 GB RAM, 8-core CPU.
The application runs inside Tomcat 8.5; I also tried running it stand-alone. The application and Postgres run on the same machine. Getting from my logon screen into the application takes about 15 seconds.
So I have no idea why it is so much slower on the server. I can give more information if you tell me what else to look for.
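One cause worth ruling out (an assumption on my part, not something your post confirms): on headless Linux servers, Java's SecureRandom can block waiting for entropy from /dev/random while generating session IDs at login, turning a sub-second logon into many seconds. A quick check of the kernel's entropy pool:

```shell
# If this number is very low (e.g. double digits), SecureRandom may be
# blocking on /dev/random during session-ID generation at login.
cat /proc/sys/kernel/random/entropy_avail
```

If it is low, starting the JVM with `-Djava.security.egd=file:/dev/./urandom` makes SecureRandom use the non-blocking source, which is a common fix for exactly this symptom.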
I am trying to deploy a Go application in a Kubernetes cluster. My application uses the goracle.v2 library to connect to the Oracle database.
The problem only happens when my application runs inside the Kubernetes cluster. I have a process that executes a stored procedure and returns a cursor, and it often takes more than 10 minutes to run.
When this happens, the active database session ends and the pod running the application stops, and nothing else happens. This scenario only occurs inside the cluster; if I run the app locally it doesn't happen, even when the process takes more than 10 minutes.
Does anyone have any idea what might be happening?
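One pattern that matches these symptoms (again, an assumption): while the stored procedure runs, the TCP connection to Oracle sits idle, and an idle-connection timeout somewhere on the cluster's network path (kube-proxy/conntrack, a NAT gateway, or a firewall) silently drops it. Locally there is no such middlebox, so long calls survive. The kernel's default keepalive interval is far longer than typical idle timeouts, so the connection is dead before the first probe is ever sent. You can inspect what the pod inherits:

```shell
# Default is 7200 s before the first keepalive probe - far longer than
# typical NAT/conntrack idle timeouts (often 300-900 s).
cat /proc/sys/net/ipv4/tcp_keepalive_time
cat /proc/sys/net/ipv4/tcp_keepalive_intvl
cat /proc/sys/net/ipv4/tcp_keepalive_probes
```

On the Oracle side, adding `(ENABLE=BROKEN)` to the connect descriptor enables TCP keepalive for the session; lowering `net.ipv4.tcp_keepalive_time` for the pod (it is a namespaced sysctl, settable via the pod's securityContext) is the cluster-side equivalent.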
I just downloaded the latest version (2.1.7) of Apache Cassandra from the official site.
I then started the server on localhost without any configuration changes and created a table by following the Getting Started guide.
I noticed that all queries to the Cassandra server are very slow.
For example, this trivial query takes about 250ms:
SELECT * FROM users where user_id=1745;
Is this normal performance? I see much better performance from other database systems on the same machine.
Maybe I should tweak something?
I have:
Intel Core i5 CPU @ 2.27 GHz
8 GB RAM
Windows 8.1
Edit 1:
Well, I see something strange.
The trace log looks pretty good (6 ms):
But when I execute this query in DataStax DevStudio, it shows 476 ms:
It can't be network latency, because the server is on localhost.
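Since the server-side trace reports ~6 ms, the remaining ~470 ms is being spent on the client side, not in Cassandra. One classic culprit on Windows (an assumption here, not something the traces prove) is that `localhost` resolves to IPv6 `::1` first, and the client stalls on a failed IPv6 connection attempt before falling back to IPv4. Checking what the name resolves to, and then connecting by `127.0.0.1` instead of `localhost`, separates the two cases:

```shell
# Show every address "localhost" resolves to; if ::1 is listed and the
# server only listens on 127.0.0.1, the client may stall on IPv6 first.
# (getent is the Linux form; on Windows, `ping localhost` shows the same.)
getent hosts localhost
```

If switching the connection to `127.0.0.1` drops the latency to match the trace, the time was going to name resolution or the IPv6 fallback, not the query.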
I have an issue: I am running my web app on a Linux machine using Tomcat.
The issue appears when I start my application:
1. It allocates 2 GB of real memory.
2. When I process about 5 million records, usage grows to 2.5 GB.
3. After shutting Tomcat down, the memory is not released at all.
System details: 32 GB RAM, Ubuntu, Java 7
Software: DB = Oracle, Tomcat 7
Thanks
First, please check whether the process was actually killed when you shut down the Tomcat server, for example: ps -ef | grep tomcat.
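Beyond that, "memory not released" on Linux is often a misreading of the numbers (a guess at your situation, not a diagnosis): once the JVM exits, its heap is returned to the kernel, but the kernel keeps recently read file pages in the page cache, so tools that report "used" memory still look high. A quick sanity check after shutdown:

```shell
# 1. Make sure no JVM survived the shutdown script.
ps -ef | grep '[t]omcat' || echo "no tomcat process running"
# 2. Read MemAvailable, not MemFree: page cache counts as "used" but is
#    reclaimable on demand, so a high "used" figure is not a leak.
grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo
```

If a java process is still listed, Tomcat's shutdown didn't actually stop it; kill that PID and the memory will return.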
I am new to LXC and Docker. Does the maximum number of containers Docker can run depend solely on CPU and RAM, or are there other factors involved in running multiple containers simultaneously?
As mentioned in the comments to your question, it will largely depend on the requirements of the applications inside the containers.
What follows is anecdotal data I collected for this answer (this is on a MacBook Pro with 8 cores and 16 GB RAM, with Docker running in VirtualBox via boot2docker in a 2 GB VM, using 2 of the MBP's cores):
I was able to launch 242 (idle) redis containers before getting:
2014/06/30 08:07:58 Error: Cannot start container c4b49372111c45ae30bb4e7edb322dbffad8b47c5fa6eafad890e8df4b347ffa: pipe2: too many open files
After that, top inside the VM reports CPU use of around 30-55% user and 10-12% system (each redis process seems to use 0.2%). I also get timeouts while trying to connect to a redis server.
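So in this test the first hard limit was neither CPU nor RAM but file descriptors: `pipe2: too many open files` means the Docker daemon hit its open-file limit (each running container costs the daemon several descriptors for its pipes and log streams). The limits in play can be checked like this:

```shell
# Soft and hard open-file limits for the current shell; the Docker daemon
# has its own pair, typically raised in its init/systemd unit (LimitNOFILE).
ulimit -n
ulimit -Hn
```

Raising the daemon's limit pushes that particular wall further out; memory and CPU only become the binding constraint once the containers actually do work.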
I have Liferay 6 with Tomcat system setup on two machines:
Machine 1:
Windows 2003 Server
2 GB RAM, 2 GHz CPU
Mysql Ver 14.14 Distrib 5.1.49
Liferay 6.0.6 with Tomcat 6
Machine 2:
Linux CentOS 5.5
4 GB RAM, 2 GHz CPU
Mysql Ver 14.14 Distrib 5.5.10
Liferay 6.0.6 with Tomcat 6
Both Liferay systems have identical startup parameters and MySQL configurations.
The Liferay system contains a custom theme and a servlet filter hook that checks each URL access.
We have written a Grinder script to load-test the system, starting with 50 concurrent users.
The test script does the following things:
Open home page
Login with username/password
Enter security key (custom portlet)
Move to a private community
Logout
On the Windows system the response time is as expected (a mean of nearly 40 seconds for each test run in Grinder).
However, on the Linux system the response time is far too high (nearly 4 minutes) for the same operations.
We tried tuning MySQL, Tomcat, the connection pool, and a few other parameters, but the result was the same. We also tested each Liferay against the other machine's MySQL (machine 1 Liferay -> machine 2 MySQL).
We are facing the same issue on Linux machines in our test environment and also at our client's end.
This looks like a duplicate question. I suspect your issue is related to memory/JVM configuration, specifically garbage collection. High CPU utilization under small loads tends to point in that direction.
In your Grinder script, have you set each of the steps as a separate transaction? This will allow you to see how much time each step is taking. It might be useful to know if everything is slower across the board, or if it's just one transaction type that's slowing you down.
Also, is there anything in the Tomcat logs on the Linux box you aren't seeing on windows? Unexpected java stack traces, etc?
Finally, are the databases on each machine identical? Do they have the same amount of data? Do they have the same indexes?
edit: Is it one transaction that's taking up all the extra time, or is every transaction slower? When you run 'top' on your Linux box, is it the Tomcat java process that's eating all your CPU, or some other process?
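To put numbers behind the garbage-collection hypothesis, two JDK-side probes are worth running on the Linux box while Grinder is driving load (a sketch, with assumptions: `jstat` ships with the JDK, and the PID discovery below assumes a single Tomcat java process):

```shell
# Find the Tomcat JVM's PID (first matching process).
PID=$(ps -ef | awk '/[t]omcat/ {print $2; exit}')
if [ -n "$PID" ]; then
  # Per-thread CPU: if a handful of GC threads dominate, the JVM is thrashing.
  top -b -H -n 1 -p "$PID" | head -n 20
  # Heap occupancy and GC time, sampled every second, 10 samples: rapidly
  # climbing FGC/FGCT columns would confirm the garbage-collection theory.
  jstat -gcutil "$PID" 1000 10
else
  echo "no tomcat process running"
fi
```

If GC looks healthy, the same `top -H` output will instead point at whichever thread (or other process) is actually burning the CPU.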