I have a Liferay 6 with Tomcat setup on two machines:
Machine 1:
Windows 2003 Server
2 GB RAM, 2 GHz CPU
MySQL Ver 14.14 Distrib 5.1.49
Liferay 6.0.6 with Tomcat 6
Machine 2:
Linux CentOS 5.5
4 GB RAM, 2 GHz CPU
MySQL Ver 14.14 Distrib 5.5.10
Liferay 6.0.6 with Tomcat 6
Both Liferay systems have identical startup parameters and MySQL configurations.
The Liferay system contains a custom theme and a servlet filter hook that checks each URL access.
We have written a Grinder script to load-test the system, starting with 50 concurrent users.
The test script does the following things:
Open home page
Login with username/password
Enter security key (custom portlet)
Move to a private community
Logout
On the Windows system the response time is as expected (a mean of nearly 40 seconds per test run in Grinder).
However, on the Linux system the response time is far higher (nearly 4 minutes) for the same operations.
We tried tuning the MySQL, Tomcat, connection pool and a few other parameters, but the results were the same. We also tested each Liferay against the other machine's MySQL (machine 1 Liferay -> machine 2 MySQL).
We are facing the same issue on Linux machines in our test environment and also at our client's end.
This looks like a duplicate question. I suspect your issue is related to memory/JVM configuration, and specifically garbage collection. High CPU utilization under small loads tends to point in that direction.
In your Grinder script, have you set each of the steps as a separate transaction? This will let you see how much time each step is taking. It would be useful to know whether everything is slower across the board, or whether just one transaction type is slowing you down.
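For reference, a minimal sketch of that per-step instrumentation in a Grinder (Jython) script; the URLs, form fields and test numbers below are placeholders, not your actual script:

from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest
from HTTPClient import NVPair

request = HTTPRequest()

# Wrapping the request once per step makes Grinder report each
# step's timing as a separate test in the summary.
homePage = Test(1, "Open home page").wrap(request)
login = Test(2, "Login").wrap(request)
logout = Test(5, "Logout").wrap(request)

class TestRunner:
    def __call__(self):
        homePage.GET("http://localhost:8080/")
        login.POST("http://localhost:8080/c/portal/login",
                   (NVPair("login", "user@example.com"),
                    NVPair("password", "secret")))
        # ... security key and private community steps go here ...
        logout.GET("http://localhost:8080/c/portal/logout")

With per-step tests, the Grinder summary shows a mean time for each step, which should reveal whether the login, the security-key portlet, or the community page is the outlier.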
Also, is there anything in the Tomcat logs on the Linux box that you aren't seeing on Windows? Unexpected Java stack traces, etc.?
Finally, are the databases on each machine identical? Do they have the same amount of data? Do they have the same indexes?
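One quick way to compare the two databases is to dump approximate row counts from information_schema on both hosts; a rough sketch using mysql-connector-python (the hostnames, credentials, and the 'lportal' schema name are assumptions to adapt):

import mysql.connector

# Placeholder hosts and credentials; point these at the two Liferay databases.
HOSTS = ["machine1.example.com", "machine2.example.com"]

# table_rows is only an estimate for InnoDB, but it is enough to
# spot large differences in data volume between the two machines.
QUERY = """
    SELECT table_name, table_rows
    FROM information_schema.tables
    WHERE table_schema = 'lportal'
    ORDER BY table_name
"""

for host in HOSTS:
    conn = mysql.connector.connect(host=host, user="root",
                                   password="secret", database="lportal")
    cursor = conn.cursor()
    cursor.execute(QUERY)
    print(host)
    for table_name, table_rows in cursor.fetchall():
        print("  %s: ~%s rows" % (table_name, table_rows))
    conn.close()

The same idea works for comparing indexes via information_schema.statistics.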
Edit: Is it one transaction that's taking up all the extra time, or is each transaction slower? When you run 'top' on your Linux box, is it the Tomcat Java process that's eating all your CPU, or some other process?
Related
I'm really not sure where to ask this.
I have a Spring Boot application running and connected to a Postgres DB. These are the machine specs:
Laptop: 8 GB RAM and an 8-core CPU.
I run the application from IntelliJ or as a stand-alone, with the application and Postgres on the same machine. Getting from my logon screen into the application takes less than half a second.
Server: 16 GB RAM and an 8-core CPU.
The application runs inside Tomcat 8.5 (I also tried running it stand-alone), with the application and Postgres on the same machine. Getting from my logon screen into the application takes about 15 seconds.
So I have no idea why it is so much slower on the server. I can give more info if you tell me what else to look for.
I just downloaded the latest version (2.1.7) of Apache Cassandra from the official site.
Then I started the server on localhost without any changes and created a table following the Getting Started guide.
I noticed that all queries to the Cassandra server are very slow.
For example, this trivial query takes about 250ms:
SELECT * FROM users where user_id=1745;
Is this normal performance? I see much better performance from other database systems on the same machine.
Maybe I should tweak something?
I have:
Intel Core i5 CPU 2.27GHz
8GB RAM
Windows 8.1
Edit1:
Well... I see something strange.
The trace log looks pretty good (6 ms), but when I execute the same query in DataStax DevStudio it shows 476 ms.
It cannot be network latency, because the server runs on localhost.
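One way to separate server-side execution time from client/tooling overhead is to request the trace programmatically; a minimal sketch with the DataStax Python driver (assuming a single local node and the mykeyspace/users schema from the Getting Started guide):

from cassandra.cluster import Cluster

# Assumes a single node on localhost and the demo keyspace/table
# from the Getting Started guide; adjust names to your schema.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect("mykeyspace")

result = session.execute("SELECT * FROM users WHERE user_id = 1745",
                         trace=True)
trace = result.get_query_trace()

# Coordinator-reported duration vs. what the client tool shows.
print("server-side duration:", trace.duration)
for event in trace.events:
    print(event.source_elapsed, event.description)

If the trace consistently reports a few milliseconds while the tool shows hundreds, the gap is on the client side (driver warm-up, connection setup, or the tool itself) rather than in Cassandra.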
I am using SolrCloud (version 4.7.1) with 4 instances and embedded ZooKeeper (test environment).
When I simulate failure of one of the instances, the indexing time goes from 4 seconds to 17 seconds.
It goes back to 4 seconds after the instance is brought back to life.
Search speed is not affected.
Our production environment shows similar behavior (only the configuration is more complex).
Is this normal or did I miss some configuration option?
It is due to having ZooKeeper embedded in the Solr cluster.
Please try an external ZooKeeper ensemble (e.g. starting each Solr node with -DzkHost=zk1:2181,zk2:2181,zk3:2181 instead of -DzkRun); that setup gives the expected results.
I am new to LXC and Docker. Does the maximum number of containers Docker can run depend solely on CPU and RAM, or are there other factors involved in running multiple containers simultaneously?
As mentioned in the comments to your question, it will largely depend on the requirements of the applications inside the containers.
What follows is anecdotal data I collected for this answer (this is on a MacBook Pro with 8 cores and 16 GB RAM, with Docker running in VirtualBox via boot2docker with 2 GB RAM, using 2 of the MBP's cores):
I was able to launch 242 (idle) redis containers before getting:
2014/06/30 08:07:58 Error: Cannot start container c4b49372111c45ae30bb4e7edb322dbffad8b47c5fa6eafad890e8df4b347ffa: pipe2: too many open files
After that, top inside the VM reports CPU use of around 30-55% user and 10-12% system (each redis process seems to use 0.2%). Also, I get timeouts while trying to connect to a redis server.
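For what it's worth, the experiment is easy to reproduce with a small script; a rough sketch (assuming the docker CLI is on PATH and the redis image is already pulled):

import subprocess

# Start idle redis containers until the daemon refuses; in the run
# described above, the ceiling was file descriptors, not CPU or RAM.
count = 0
while True:
    result = subprocess.run(["docker", "run", "-d", "redis"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print("failed after %d containers: %s" % (count, result.stderr.strip()))
        break
    count += 1

Since the limit in that run was the daemon's open-file limit rather than CPU or RAM, raising the descriptor limit for the Docker daemon may push the ceiling higher.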
I have installed AppFabric on my machine. Every time I restart the machine I have to restart AppFabric caching by running "Start-CacheCluster" in the "Caching Administration Windows PowerShell" console. How can I set this up to run automatically every time the machine restarts?
Change the startup type of the AppFabricCachingService from Manual to Automatic - see this answer.
You'll likely need to use a startup task, or your run-this-on-startup method of your choice, to invoke the command when the machine boots.
If you are using a network-share bound cluster config, you cannot auto-start the cluster as the hosts perform the lead host operations in-memory, which requires a particular startup order for Cluster, config, hosts, etc.
SQL Server-based configs should be able to support an auto-start scenario (just set startmode=auto on the service), as the lead-host operations are offloaded to the database config.
Auto-start is a desperately needed feature; without it, HA with AppFabric is nearly impossible.