RED5/Java memory limit

I have RED5 installed on my virtual server (I need it for my chat application), which has 1GB of RAM. When I start RED5 it takes approx. 1GB immediately after start, and that's a problem, because then my whole site becomes very slow. I am sure it does not use the whole 1GB, so I need a way to limit it to, let's say, 700MB.
I have tried things like this in red5.sh:
export JAVA_OPTS="-Xms512m -Xmx1024m $LOGGING_OPTS $SECURITY_OPTS $JAVA_OPTS"
But without success.
EDIT: forgot to mention, I use Debian on my VPS.
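For what it's worth, -Xmx (not -Xms) is what caps the heap, so the line above still allows the JVM to grow to a full 1GB. A minimal sketch of a 700MB cap (values are illustrative, and the export must come after any later line in red5.sh that overwrites JAVA_OPTS):
# Cap the maximum heap at 700MB; -Xms only sets the initial size
export JAVA_OPTS="-Xms256m -Xmx700m $LOGGING_OPTS $SECURITY_OPTS $JAVA_OPTS"
Keep in mind the heap is not the JVM's only memory: permgen, thread stacks and native buffers come on top of -Xmx, so on a 1GB VPS it may be safer to aim a bit below 700MB.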

Related

Minify laravel views on local machine and push to server

We are using gulp in Laravel to minify our views. The problem we are facing is that the server is unable to run gulp due to its low RAM (512MB). Is there any way we can minify the HTML on our local machine and then push it to our server?
I think you should solve this by creating swap space on your server.
Swap files increase the amount of virtual memory available to perform tasks such as running gulp.
Linux divides its physical RAM (random access memory) into chunks of memory called pages. Swapping is the process whereby a page of memory is copied to the preconfigured space on the hard disk, called swap space, to free up that page of memory. The combined sizes of the physical memory and the swap space is the amount of virtual memory available.
from: https://wiki.archlinux.org/index.php/swap
Depending on what your server setup looks like you can find many guides on how to enable swap for your particular server.
Assuming you are on Linux, you can check whether your server has any swap space allocated by running:
sudo swapon --show
and also
free -h
To create a swap file, allocate and enable it with:
sudo fallocate -l 1G /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
This will give you a swap file of 1GB.
You then have to secure the swap file and tune swappiness and so on for performance; the details depend on your system.
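On a typical Debian/Ubuntu system those follow-up steps could look like this (a sketch; the swappiness value is a common starting point, not a tuned one):
# Restrict the swap file so only root can read it
sudo chmod 600 /swapfile
# Make the swap file persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Prefer RAM over swap; sysctl changes are lost on reboot unless added to /etc/sysctl.conf
sudo sysctl vm.swappiness=10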

Poor performance for a clean Apache Cassandra server installation

Just downloaded latest version (2.1.7) of Apache Cassandra from official site.
Then I started the server on localhost without any changes and created a table via the Getting Started guide.
I noticed, that all queries to the Cassandra server are very slow.
For example, this trivial query takes about 250ms:
SELECT * FROM users where user_id=1745;
Is it normal performance? I see much better performance for other database systems on the same machine.
Maybe I should tweak something?
I have:
Intel Core i5 CPU 2.27GHz
8GB RAM
Windows 8.1
Edit1:
Well... I see something strange.
The trace log looks pretty nice (6ms), but when I execute the same query in DataStax DevStudio, it shows 476ms.
It cannot be network latency, because the server runs on localhost.
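One way to separate server-side execution time from client overhead is cqlsh's built-in tracing (a sketch; the keyspace prefix is omitted as in the question, and the -e/--execute flag may not exist in every cqlsh version):
# Run the query with tracing enabled; the trace breaks down server-side elapsed time
cqlsh -e "TRACING ON; SELECT * FROM users WHERE user_id = 1745;"
If the trace keeps reporting single-digit milliseconds while the client tool reports hundreds, the extra time is being spent in the client or driver rather than in Cassandra itself.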

Hadoop 2.x on amazon ec2 t2.micro

I'm trying to install and configure Hadoop 2.6 on Amazon EC2 t2.micro instance (The Free one, with only 1GB RAM) in Pseudo-Distributed Mode.
I could configure and start all the daemons (i.e. NameNode, DataNode, ResourceManager, NodeManager). But when I try to run a MapReduce wordcount example, it fails.
I don't know if it is failing due to low memory (since t2.micro has only 1GB of memory and some of it is taken up by the host OS, Ubuntu in my case), or if it could be some other reason.
I'm using default memory settings. If I tweak everything down to minimum memory settings, will that solve the problem? What is the minimum memory in MB that can be assigned to containers?
Thanks a lot, guys. I'd appreciate any information you can provide.
Without tweaking any memory settings, I could only sometimes run a pi example with 1 mapper and 1 reducer on the free-tier t2.micro instance; it fails most of the time.
By using the memory-optimized r3.large instance with 15GB of RAM everything works perfectly. All jobs complete without failure.
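If you want to keep trying on the 1GB instance, shrinking the per-container memory is the usual knob. A sketch with illustrative values (the jar path assumes the default layout, and yarn.scheduler.minimum-allocation-mb, which defaults to 1024MB, must also be lowered in yarn-site.xml or small requests get rounded back up to 1GB):
# Request small containers for a single job; values are illustrative, not tuned for t2.micro
# The same properties can be set permanently in mapred-site.xml
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount \
  -D mapreduce.map.memory.mb=256 \
  -D mapreduce.reduce.memory.mb=256 \
  -D mapreduce.map.java.opts=-Xmx200m \
  -D mapreduce.reduce.java.opts=-Xmx200m \
  input output
The -Xmx values are kept below the container sizes so the JVM heap fits inside what YARN allocates.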

SQLServer using too much memory

I have installed SQL Server 2008 R2 Express on my desktop machine (with Windows 7).
I have only one local server running (./SQLEXPRESS), but the SQL Server process is taking ALL the RAM it possibly can.
On a machine with 3GB of RAM things start to get slow, so I limited the maximum amount of RAM for the server, and now SQL Server constantly gives error messages saying the memory is not enough. It's using 1GB of RAM with only one LOCAL server and 2 completely empty databases; how is 1GB of RAM not enough?
When the process starts it uses a very acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the defined maximum and starts complaining that not enough memory is available. At that point I have to restart the server to use it again.
I have read about a hotfix to solve one of the errors I got from SQL Server:
There is insufficient system memory in resource pool 'internal' to run this query
But it is already installed on my SQL Server.
Why is it using so much memory?
You can try configuring the 'max server memory' configuration option.
For additional details check:
http://technet.microsoft.com/en-us/library/ms178067(v=sql.105).aspx
http://support.microsoft.com/kb/321363
http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/df51cd87-68ce-439a-87fa-d5e5cd93ab31
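If you prefer the command line, the same option can be set with sqlcmd (a sketch; it assumes the SQL Server command-line tools are installed, and the 1024MB cap is an illustrative value):
# Cap the instance's memory; 'show advanced options' must be enabled first
sqlcmd -S .\SQLEXPRESS -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 1024; RECONFIGURE;"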
I had a problem like this.
You can increase the memory limit of the DB server.
In the MSSQL server properties, choose Memory; there is a "maximum server memory (in MB)" field you can increase.
Or do the same thing with a query:
-- Enable advanced options so 'max server memory' becomes visible, then raise the cap
EXEC sp_configure 'Show Advanced Options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 3500;
GO
RECONFIGURE;
GO

liferay performance issue on linux

I have Liferay 6 with Tomcat set up on two machines:
Machine 1:
Windows 2003 Server
2GB RAM, 2GHz CPU
Mysql Ver 14.14 Distrib 5.1.49
Liferay 6.0.6 with Tomcat 6
Machine 2:
Linux CentOS 5.5
4GB RAM, 2GHz CPU
Mysql Ver 14.14 Distrib 5.5.10
Liferay 6.0.6 with Tomcat 6
Both Liferay systems have identical startup parameters and MySQL configurations.
The liferay system contains a custom theme and a servlet filter hook checking each URL access.
We have written a Grinder script to load-test the system, starting with 50 concurrent users.
The test script does the following things:
Open home page
Login with username/password
Enter security key (custom portlet)
Move to a private community
Logout
On Windows system the response time is as expected (nearly 40 seconds mean time for each test in Grinder).
However on the Linux system the response time is too high (nearly 4mins) for the same operations.
We tried revising the MySQL, Tomcat, connection pool and a few other parameters, but the result was always the same. We also tested Liferay against the other machine's MySQL (machine 1 Liferay -> machine 2 MySQL).
We are facing the same issue on Linux machines in our test environment and also at our client's end.
This looks like a duplicate question. I suspect your issue is related to memory/JVM configuration, and specifically garbage collection. High CPU utilization under small loads tends to point in that direction.
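To check the garbage-collection theory cheaply, GC logging can be switched on in Tomcat's bin/setenv.sh (a sketch; these are the classic flags for the Java 6/7-era JVMs Liferay 6 usually runs on):
# Log every collection with timestamps so long or frequent GC pauses become visible
CATALINA_OPTS="$CATALINA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:$CATALINA_BASE/logs/gc.log"
Long full-GC pauses recurring every few seconds in gc.log would explain a roughly 6x slowdown under load.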
In your Grinder script, have you set each of the steps as a separate transaction? This will allow you to see how much time each step is taking. It might be useful to know if everything is slower across the board, or if it's just one transaction type that's slowing you down.
Also, is there anything in the Tomcat logs on the Linux box you aren't seeing on windows? Unexpected java stack traces, etc?
Finally, are the databases on each machine identical? Do they have the same amount of data? Do they have the same indexes?
edit: Is it one transaction that's taking up all the extra time, or is each transaction slower? When you run 'top' on your Linux box, is it the Tomcat Java process that's eating all your CPU, or some other process?
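On the 'top' point, a per-thread view narrows things down further (a sketch; the Bootstrap class name assumes a standard Tomcat launch script):
# Show per-thread CPU inside the Tomcat JVM; busy GC threads would show up here
top -H -p "$(pgrep -f org.apache.catalina.startup.Bootstrap)"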
