I have a Ruby application that uses EventMachine and starts 16 processes, each managing 1000 connections.
Initially each process uses only around 150 MB, but after some runtime they grow toward 500 MB each, and I am running out of memory and swap.
The number of open connections (as reported by EM.connection_count) stays normal (around 1000 the whole time), so there should not really be references to old connections anymore.
Unfortunately, memprof only runs under Ruby 1.8, so this is not an option in my case.
I don't want to build the ITAPPMONROBOT for my application just so I can keep it running 24/7/365. How can I find the memory leak here or how can I help the GC?
There is a known memory leak in Ruby 1.9.2's Kernel#method that affects most EM applications. See http://groups.google.com/group/eventmachine/browse_thread/thread/fa56ff02440a624d and http://redmine.ruby-lang.org/issues/show/3466#note-3
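If you want to confirm whether Method objects (or your connection wrappers) are what keeps growing, a minimal diagnostic sketch like the following can help. It uses only plain MRI introspection (GC and ObjectSpace); the helper name and the timer interval are made up for illustration:

    # Dump live object counts per class so they can be diffed over time (MRI only).
    def log_top_object_counts(limit = 20)
      GC.start
      counts = Hash.new(0)
      ObjectSpace.each_object { |obj| counts[obj.class] += 1 }
      counts.sort_by { |_, n| -n }.first(limit).each do |klass, n|
        puts "#{klass}: #{n}"
      end
      puts "Method objects: #{ObjectSpace.each_object(Method).count}"
    end

    # Called from inside the reactor, for example:
    # EM.add_periodic_timer(60) { log_top_object_counts }

Classes whose counts climb steadily between runs are the leak candidates.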
Related
I'm working on a Ruby on Rails application which has a memory leak, so eventually it crashes when there's no more memory.
However, in the final stage it is basically only running the GC and processing very few requests, effectively DoS-ing itself. This DoS phase lasted between 1 hour and 6 hours for my application!
I tried to locate the memory leak but no luck so far, so now I want to find a workaround for the production server.
Is there a way to configure the MRI Ruby GC so that it simply crashes when it reaches a memory limit? I mean crash the first time Ruby tries to allocate more memory and the operating system denies it.
As far as I know, you cannot do that.
But you have other options:
Set up something at the system level that prevents Ruby from using too much memory (the OOM killer, maybe?)
Set up your web server so its workers kill themselves, like in this gem (a minimal sketch of the same idea follows below)
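As a rough illustration of the second option, here is a minimal sketch of a worker that watches its own resident set size and shuts itself down once a threshold is crossed. The threshold, the check interval, and reading RSS via ps are all assumptions; the gem linked above does the same thing more robustly:

    # Self-kill sketch: exit the worker once its RSS passes a limit, so the
    # master process (Unicorn, Passenger, ...) can spawn a fresh one.
    MAX_RSS_KB = 500_000  # ~500 MB, illustrative value

    def current_rss_kb
      `ps -o rss= -p #{Process.pid}`.to_i  # ps reports RSS in KB on Linux
    end

    Thread.new do
      loop do
        sleep 30
        if current_rss_kb > MAX_RSS_KB
          warn "worker #{Process.pid} is over #{MAX_RSS_KB} KB, shutting down"
          Process.kill(:QUIT, Process.pid)  # graceful shutdown signal for many app servers
        end
      end
    end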
I am running Redis on Windows and I am having some performance issues. The machine is a Xeon E5 with 32 GB RAM and an SSD with hardware RAID, running Windows Server 2012. There are some other processes running, but they are not critical and are idle most of the time.
I noticed performance problems and operations time out very often, so I started "redis-cli --intrinsic-latency 100". The output shows that the max latency goes up to 15,000 microseconds, which I think is very slow.
I was also running a memory profiler: the read/write performance is not great (5 GB/sec), but I think this should not be the bottleneck. At the moment I have absolutely no idea what to try.
Can you give me some tips on how to find the performance problem?
There is no "fork" as in Linux in Windows. So when you dump your redis db, it can just "stop the world" in order to write on the disk "dump.rdb". Well, they did implement a "Copy-on-write" strategy that don't stop redis, it just copies values when dumping (the redis clients will still be able to get responses from redis). It is in their version log: https://github.com/MSOpenTech/Redis
"There is a replacement for the UNIX fork() API that simulates the copy-on-write behavior using a memory mapped file."
This is the real bottleneck of Redis on Windows, since it adds overhead and is more complex (bugs?). It is explained here: http://blogs.msdn.com/b/interoperability/archive/2012/04/26/here-s-to-the-first-release-from-ms-open-tech-redis-on-windows.aspx
As a result, you could try running Redis on Linux to test whether this is a performance issue of the Windows port. Also, the more often you write dump.rdb, the bigger the overhead (you can change the save frequency or try disabling snapshots completely for testing; a sketch follows below).
Finally, it could also be a network problem, so check that it is not a network rule or hardware issue (insufficient throughput, a bad cable, firewalls...). Are your Redis clients on the same physical machine?
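To see what latency your clients actually observe, and to rule the dump out, a rough client-side check like the following may help. It assumes the redis-rb gem; CONFIG SET save "" is the standard command to turn RDB snapshots off temporarily:

    # Rough client-side latency check (assumes the redis-rb gem).
    require 'redis'
    require 'benchmark'

    redis = Redis.new(host: "127.0.0.1", port: 6379)

    # Optional for the test: disable RDB snapshots (restore your `save` lines afterwards!)
    redis.config(:set, "save", "")

    samples = Array.new(1000) do
      Benchmark.realtime { redis.ping } * 1_000_000  # microseconds per round trip
    end

    avg = samples.inject(:+) / samples.size
    puts "min #{samples.min.round} us, max #{samples.max.round} us, avg #{avg.round} us"

If the numbers stay bad with snapshots disabled, the dump is probably not the culprit; if even a client running on the same machine sees high round-trip times, the network is probably not the culprit either.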
I have been using a Windows port of Redis called "Memurai". They have a developer edition free of charge.
Now, in one of their blog posts they claim they have solved the fork() problem. See the excerpt below.
Memurai performance seems good to me, even with persistence enabled (both RDB and AOF), although I have not run any specific tests myself. There's another blog post about Memurai performance here.
It's worth giving it a try.
"Internally, Redis uses the fork() system call to perform asynchronous writes, but that’s not an option for Memurai because fork() doesn’t exist on Windows. Instead, Memurai uses Windows shared memory to implement a start-of-the-art version of fork() that’s finely tuned for performance and..."
I am not an OS expert, and I am having trouble understanding my server's memory usage. I need your advice to understand the following:
My server has 8 GB RAM and operates as a web server. PHP, MySQL and Apache processes consume the majority of the memory. When I issue the command "free" after the system is rebooted, I would normally see something along these lines:
             total       used       free     shared    buffers     cached
Mem:       8059080    2277924    5781156          0        948     310852
-/+ buffers/cache:     1966124    6092956
Swap:      4194296          0    4092668
Obviously, sooner or later the free memory drops and the cached memory increases, and I assume there is nothing wrong with that since the OS decides what to cache.
What I don't understand is that about 1-2 days after the machine is rebooted, I see a slight increase in used swap. Doesn't this mean that the server has no free memory anymore and is doing I/O instead? How can I find out which processes cause this?
I am asking this question on Stack Overflow because if I ask my hosting provider, I am sure they would ask for more money to increase the RAM.
Thanks.
This is perfectly normal. When the machine starts up, a large number of services also start up. As they run their startup code, read their configuration, and so on, they dirty some pages of memory. Many of these services will never run again. By writing this data to swap, the operating system accomplishes two things:
First, if it ever does encounter memory pressure, it can discard the pages without having to write them first, since it has already written them. Second, it can discard the pages to make more free memory to enlarge the cache.
The alternative is to keep information that hasn't been touched in days in physical memory. And that just doesn't make sense.
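To answer the "which processes cause this" part of the question: on Linux, each process reports how much of it has been swapped out as VmSwap in /proc/<pid>/status (on reasonably recent kernels), so a small script can rank processes by swap usage. The sketch below uses Ruby only because that is what the rest of this thread uses; any language that can read /proc will do:

    # List the processes using the most swap by reading VmSwap from /proc.
    rows = Dir.glob("/proc/[0-9]*/status").map do |path|
      status  = File.read(path) rescue next
      name    = status[/^Name:\s+(.+)$/, 1]
      swap_kb = status[/^VmSwap:\s+(\d+) kB/, 1].to_i
      [name, File.basename(File.dirname(path)), swap_kb]
    end.compact

    rows.sort_by { |_, _, kb| -kb }.first(15).each do |name, pid, kb|
      puts format("%-20s pid=%-7s %8d kB swapped", name, pid, kb)
    end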
I'm having issues with a site on my server loading, so I ran 'top' and saw this:
(screenshot of the ruby processes shown in top: http://share.shpigford.com/images/ruby_processes-20091112-103834.png)
Dozens of ruby processes...and I have no idea what that means or if that's normal. :)
I have a feeling that your PassengerMaxPoolSize is set too high for such a small amount of memory. Just totaling that up, your Ruby processes are eating 81% of your available memory.
See this related discussion on ServerFault. This question should probably be migrated over there.
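If you want to total it yourself instead of adding up top by eye, a quick sketch like this sums the resident set size of every process whose command contains "ruby" (it shells out to standard Linux ps; adjust the pattern if Passenger labels its workers differently):

    # Sum the RSS (in KB) of all processes whose command name matches "ruby".
    total_kb = `ps -eo rss=,comm=`.lines.inject(0) do |sum, line|
      rss, comm = line.split(' ', 2)
      comm.to_s =~ /ruby/i ? sum + rss.to_i : sum
    end

    puts "ruby processes are using roughly #{total_kb / 1024} MB resident"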
I don't know what is normal on your system.
In a production server environment Ruby scales by adding processes, so I would expect to see at least one process per CPU core. (Real or virtual: my i7 920 has 8 virtual cores and needs 8 Ruby processes for 100% CPU load.)
Dozens sounds like a lot, but it is possible if your site uses Ruby for lots of miscellaneous daemon processes.
I think you'll have to ask someone who knows what is supposed to run on the system.
Windows Server 2008. How can I quickly use up RAM so as to induce GC in my app? If there is a way to do it without needing Visual Studio or installing a language runtime, that would be good.
EDIT: I don't want to have to write an app and then copy it over to the server. I'm looking for a way to do it quickly without writing an app that requires an IDE or installation of a runtime/compiler.
Perhaps a PowerShell or batch script?...
I don't think using up RAM outside your process is going to necessarily trigger GC.
If I understand your question correctly, you have a program Foo.exe that is written in some unknown language, running on some unknown runtime (are you not allowed to post the details for some reason, or do you just not know?), and you want to try to get that program's runtime to trigger a garbage collection. However, you want to do this by using up RAM outside of foo.exe.
You could do this by creating a simple batch file that just starts up a hundred copies of IE or Word or whatever program you want. However, I don't think that will do what you want it to do. If your process has already allocated a certain amount of memory, it won't necessarily give that memory up or trigger GC just because other processes are being started. It may page to disk, or it may force other programs to page to disk. But not all garbage collectors are alike, so we can't really help without more details. I'm pretty sure some VMs never give back memory once they've allocated it, even after GC.
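To make the "allocation happens inside the process" point concrete, here is a tiny sketch, using Ruby purely as an example runtime since the question doesn't say what the app is written in. The GC run counter only moves when this process allocates, no matter how much RAM other processes are eating:

    # Ruby used only as an example runtime: GC activity is driven by this
    # process's own allocations, not by memory pressure elsewhere on the box.
    before = GC.stat[:count]

    junk = Array.new(500_000) { "x" * 100 }  # allocate tens of MB of short strings

    puts "GC ran #{GC.stat[:count] - before} times during the allocation"
    junk = nil  # drop the reference so the memory becomes collectable again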
You could run your program inside a virtual machine such as Virtual Box, where you specify the memory ceiling of the guest operating system.
I'm having trouble imagining a scenario where this would be necessary though. Could you provide more information about the problem?
If you are using Java you can specify the maximum heap size with -Xmx. Search for "JVM memory settings".