I have installed SQL Server 2008 R2 Express on my desktop machine (running Windows 7).
I have only one local server running (./SQLEXPRESS), but the SQL Server process is taking all the RAM it can get.
On a machine with 3GB of RAM, things start to get slow, so I limited the maximum amount of RAM the server can use; now SQL Server constantly gives error messages saying the memory is not enough. It's using 1GB of RAM with only one LOCAL server and 2 completely empty databases; how is 1GB of RAM not enough?
When the process starts it uses a perfectly acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the defined maximum and starts complaining that there is not enough memory available. At that point I have to restart the server to be able to use it again.
I have read about a hotfix for one of the errors I got from SQL Server:
There is insufficient system memory in resource pool 'internal' to run this query
But it is already installed on my SQL Server.
Why is it using so much memory?
You can try configuring the 'max server memory' option:
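For example (a sketch; 1024 MB is only an illustrative value, pick one that leaves room for the OS):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 1024;  -- example value in MB
RECONFIGURE;
GO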
For additional details check:
http://technet.microsoft.com/en-us/library/ms178067(v=sql.105).aspx
http://support.microsoft.com/kb/321363
http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/df51cd87-68ce-439a-87fa-d5e5cd93ab31
I had a problem like this.
You can increase the DB cache size.
In the MSSQL server properties, choose Memory; there you will find "Maximum server memory (in MB)", which you can increase.
Or do the same thing with a query:
EXEC sp_configure 'Show Advanced Options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 3500;
GO
RECONFIGURE;
GO
I keep getting this error when running some of my steps:
Container [pid=5784,containerID=container_1482150314878_0019_01_000015] is running beyond physical memory limits. Current usage: 5.6 GB of 5.5 GB physical memory used; 10.2 GB of 27.5 GB virtual memory used. Killing container.
I searched the web and people say to increase the memory limits. This error occurs after I already increased them to the maximum allowed on the instance type I'm using (c4.xlarge). Can I get some assistance with this error and how to solve it?
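For reference, the limits people usually mean are the per-task YARN/MapReduce properties below (a sketch: property names are the standard ones, the values are only examples, and myjob.jar/MyDriver are placeholders; the -D flags are only picked up if the driver uses ToolRunner):
# Container size per task, and the JVM heap inside it
# (keep the heap below the container limit; ~80% is typical)
hadoop jar myjob.jar MyDriver \
  -Dmapreduce.map.memory.mb=5632 \
  -Dmapreduce.map.java.opts=-Xmx4506m \
  -Dmapreduce.reduce.memory.mb=5632 \
  -Dmapreduce.reduce.java.opts=-Xmx4506m \
  input/ output/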
Also, I don't understand why MapReduce throws this error instead of just swapping, or even working slower but continuing to run...
NOTE: This error started happening after I changed to a custom output compression, so it should be related to that.
Thanks!
I just downloaded the latest version (2.1.7) of Apache Cassandra from the official site.
Then I started the server on localhost without any changes and created a table following the Getting Started guide.
I noticed that all queries to the Cassandra server are very slow.
For example, this trivial query takes about 250ms:
SELECT * FROM users where user_id=1745;
Is this normal performance? I see much better performance from other database systems on the same machine.
Maybe I should tweak something?
I have:
Intel Core i5 CPU 2.27GHz
8GB RAM
Windows 8.1
Edit 1:
Well... I see something strange.
The trace log looks pretty good (6ms), but when I execute this query in DataStax DevStudio, it shows 476ms.
It cannot be network latency, because the server runs on localhost.
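For what it's worth, the server-side number can be reproduced independently of DevStudio with cqlsh's built-in tracing (a sketch; demo stands in for whatever keyspace the table was created in):
USE demo;         -- substitute your keyspace
TRACING ON;       -- cqlsh prints a server-side trace after each query
SELECT * FROM users WHERE user_id = 1745;
The trace printed after the result rows includes the total server-side elapsed time, so any gap between that and the client tool's figure is overhead outside the server.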
WebLogic 10.3 gives out-of-memory errors.
I have done the following things:
Increased -Xms to 512m
Increased -Xmx to 1024m
Increased the max perm size in setdomainenv.bat
Is there any other way to resolve this issue? I have a 2 GB system.
It is a production machine and the log is around 4 GB in size. When I analysed the log I found many connection-refused errors.
You'll need to profile your application to find the memory leak. It could be open database connections or other resources not being handled properly.
Just increasing Xms and Xmx won't work beyond a point.
Take a heap dump into an HPROF file and analyse it using the Eclipse Memory Analyzer Tool or VisualVM,
or monitor the JVM live using JConsole.
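For example, with the standard JDK tools on the path (a sketch; <pid> is the WebLogic JVM's process id):
jps -l                                          # find the WebLogic JVM's pid
jmap -dump:format=b,file=weblogic.hprof <pid>   # write the heap to an HPROF file
The resulting weblogic.hprof can then be opened in MAT or VisualVM to see which objects dominate the heap.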
I have RED5 installed on my virtual server (I need it for my chat application), which has 1GB of RAM. When I start RED5 it takes approximately 1GB immediately, and that's a problem, because then my whole site is very slow. I am sure it does not actually use the whole 1GB, so I need a way to limit it to, let's say, 700MB.
I have tried things like this in red5.sh:
export JAVA_OPTS="-Xms512m -Xmx1024m $LOGGING_OPTS $SECURITY_OPTS $JAVA_OPTS"
But without success.
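If I understand the flags correctly, -Xmx is the ceiling, so the line above still allows the heap to grow to 1GB. Is something like this (values are my guess) the right way to cap it near 700MB?
# -Xmx caps the heap; the JVM process itself needs some memory on top of that
export JAVA_OPTS="-Xms256m -Xmx700m $LOGGING_OPTS $SECURITY_OPTS $JAVA_OPTS"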
EDIT: I forgot to mention, I use Debian on my VPS.
I have Liferay 6 with Tomcat set up on two machines:
Machine 1:
Windows 2003 Server
2GB RAM, 2GHz CPU
Mysql Ver 14.14 Distrib 5.1.49
Liferay 6.0.6 with Tomcat 6
Machine 2:
Linux CentOS 5.5
4GB RAM, 2GHz CPU
Mysql Ver 14.14 Distrib 5.5.10
Liferay 6.0.6 with Tomcat 6
Both Liferay systems have identical startup parameters and MySQL configurations.
The Liferay system contains a custom theme and a servlet filter hook that checks each URL access.
We have written a Grinder script to load-test the system, starting with 50 concurrent users.
The test script does the following things:
Open home page
Login with username/password
Enter security key (custom portlet)
Move to a private community
Logout
On the Windows system the response time is as expected (nearly 40 seconds mean time for each test in Grinder).
However, on the Linux system the response time is far too high (nearly 4 minutes) for the same operations.
We tried revising MySQL, Tomcat, the connection pool and a few other parameters, but the result was always the same. We also tested each Liferay against the other machine's MySQL (machine 1 Liferay -> machine 2 MySQL).
We are facing the same issue on Linux machines in our test environment and also at our client's end.
This looks like a duplicate question. I suspect your issue is related to memory/JVM configuration, and specifically garbage collection. High CPU utilization under small loads tends to point in that direction.
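A quick way to check the GC theory on the Linux box (a sketch; <tomcat_pid> is the Tomcat JVM's process id, and jstat ships with the JDK):
jstat -gcutil <tomcat_pid> 1000   # print GC statistics every second
If the FGC (full GC count) and FGCT (full GC time) columns climb rapidly, the JVM is spending its time collecting garbage rather than serving requests.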
In your Grinder script, have you set each of the steps as a separate transaction? This will allow you to see how much time each step is taking. It might be useful to know if everything is slower across the board, or if it's just one transaction type that's slowing you down.
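A minimal sketch of what I mean, using the standard Grinder 3 Jython API (URLs, test numbers and the login path are illustrative only):
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# One Test per step: the console then reports timing for each step separately
homePage = Test(1, "Open home page").wrap(HTTPRequest())
login    = Test(2, "Login").wrap(HTTPRequest())

class TestRunner:
    def __call__(self):
        homePage.GET("http://liferay-host/")
        # the real script would POST credentials, enter the security key, etc.
        login.GET("http://liferay-host/c/portal/login")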
Also, is there anything in the Tomcat logs on the Linux box you aren't seeing on Windows? Unexpected Java stack traces, etc.?
Finally, are the databases on each machine identical? Do they have the same amount of data? Do they have the same indexes?
Edit: Is it one transaction that's taking up all the extra time, or is each transaction slower? When you run 'top' on your Linux box, is it the Tomcat Java process that's eating all your CPU, or some other process?
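For the 'top' check, the per-thread view is often more telling (a sketch; <tomcat_pid> is the Tomcat process id):
ps aux --sort=-%cpu | head        # which process is on top overall
top -H -p <tomcat_pid>            # per-thread CPU inside the Tomcat JVM
A handful of threads pinned at high CPU, combined with the jstat output above, would again point at garbage collection.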