Ruby big array and memory

I created a big array a, whose memory grew to ~500 MB:
a = []
t = Thread.new do
  loop do
    sleep 1
    print "#{a.size} "
  end
end
5_000_000.times do
  a << [rand(36**10).to_s(36)]
end
puts "\n size is #{a.size}"
a = []
t.join
After that, I "cleared" a, but the allocated memory didn't change until I killed the process. Is there something special I need to do to remove the data that was assigned to a from memory?

If I use the Ruby Garbage Collection Profiler on a lightly modified version of your code:
GC::Profiler.enable
GC::Profiler.clear
a = []
5_000_000.times do
  a << [rand(36**10).to_s(36)]
end
puts "\n size is #{a.size}"
a = []
GC::Profiler.report
I get the following output on Ruby 1.9.3 (some columns and rows removed):
GC 60 invokes.
Index  Invoke Time(sec)  Use Size(byte)  Total Size(byte) ...
    1             0.109          131136            409200 ...
    2             0.125          192528            409200 ...
  ...
   58            33.484       199150344         260938656 ...
   59            36.000       211394640         260955024 ...
The profile starts with 131 136 bytes used and ends with 211 394 640 bytes used, never decreasing anywhere in the run, so we can assume that no garbage collection has taken place.
If I then add one line of code that appends a single element to the array a, placed after a has grown to 5 million elements and then had an empty array assigned to it:
GC::Profiler.enable
GC::Profiler.clear
a = []
5_000_000.times do
  a << [rand(36**10).to_s(36)]
end
puts "\n size is #{a.size}"
a = []
# the only change is to add one element to the (now) empty array a
a << [rand(36**10).to_s(36)]
GC::Profiler.report
This changes the profiler output to (some columns and rows removed):
GC 62 invokes.
Index  Invoke Time(sec)  Use Size(byte)  Total Size(byte) ...
    1             0.156          131376            409200 ...
    2             0.172          192792            409200 ...
  ...
   59            35.375       211187736         260955024 ...
   60            36.625       211395000         469679760 ...
   61            41.891         2280168         307832976 ...
This profiler run starts with 131 376 bytes used, similar to the previous run, and grows, but it ends with 2 280 168 bytes used, significantly lower than the previous run's 211 394 640 bytes. We can assume that garbage collection took place during this run, probably triggered by the new line of code that adds an element to a.
The short answer is no, you don't need to do anything special to remove the data that was assigned to a, but hopefully this gives you the tools to prove it.

You can call GC.start, but you might not want to. See for example: Ruby garbage collect for a discussion here on Stack Overflow. Basically, I'd let the garbage collector decide for itself when to run unless you have a compelling reason to force it.
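If you do decide to force a collection, a minimal sketch of checking that it actually ran, using GC.stat[:count] (the total number of collections so far; the exact set of GC.stat keys varies between Ruby versions):

```ruby
a = Array.new(500_000) { rand(36**10).to_s(36) }  # hold ~500k short strings
runs_before = GC.stat[:count]  # total GC runs so far
a = nil                        # drop the only reference to the array
GC.start                       # request a full collection right now
puts "GC runs: #{runs_before} -> #{GC.stat[:count]}"
```

The counter increases, confirming a collection happened; whether the freed pages are returned to the OS is still up to the allocator.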

Related

Large amount of memory allocated by net/protocol.rb:153

While running memory_profiler, I noticed a large amount of memory being allocated by Ruby's net/protocol.rb. I call it when performing an HTTP request to download a file from a server. The file is 43.67 MB, and net/protocol.rb alone allocates 262,011,476 bytes just to download it.
Looking at the "allocated memory by location" section in the profiler report below, I can see net/protocol.rb:172 and http/response.rb:334 allocating 50-60 MB of memory each, which is about the size of the file, so that looks reasonable. However, the topmost entry (net/protocol.rb:153) worries me: that's 200 MB of memory, at least 4x the size of the file.
I have two questions:
Why does net/protocol need to allocate 5x the size of the file in order to download it?
Is there anything I can do to reduce the amount of memory used by net/protocol?
memory_profiler output:
Total allocated: 314461424 bytes (82260 objects)
Total retained: 0 bytes (0 objects)
allocated memory by gem
-----------------------------------
314461304 ruby-2.1.2/lib
120 client/lib
allocated memory by file
-----------------------------------
262011476 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb
52435727 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb
7971 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb
2178 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb
1663 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb
1260 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb
949 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb
120 /Users/andre.debrito/git/techserv-cache/client/lib/connections/cache_server_connection.rb
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/http.rb
allocated memory by location
-----------------------------------
200483909 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
60548199 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:172
52428839 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:334
978800 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:155
2537 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:61
2365 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:172
2190 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:54
1280 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:56
960 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:62
836 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:165
792 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:13
738 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:125
698 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:263
576 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:214
489 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:40
480 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:127
360 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:40
328 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:610
320 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:71
320 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:30
320 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:59
308 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:322
256 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:879
240 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1615
239 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:211
232 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:38
224 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:181
200 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:17
192 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:42
179 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:877
169 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1459
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1029
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:434
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:435
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:445
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1617
149 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1445
147 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1529
129 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:98
128 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1475
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:444
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:446
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:447
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:29
120 /Users/andre.debrito/git/techserv-cache/client/lib/connections/cache_server_connection.rb:45
96 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:899
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:39
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:45
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:46
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:145
allocated memory by class
-----------------------------------
309678360 String
3445304 Thread::Backtrace
981096 Array
352376 IO::EAGAINWaitReadable
1960 MatchData
1024 Hash
328 Net::HTTP
256 TCPSocket
256 URI::HTTP
128 Time
120 Net::HTTP::Get
120 Net::HTTPOK
96 Net::BufferedIO
allocated objects by gem
-----------------------------------
82259 ruby-2.1.2/lib
1 client/lib
allocated objects by file
-----------------------------------
81908 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb
129 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb
127 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb
28 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb
23 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb
23 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb
19 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/http.rb
1 /Users/andre.debrito/git/techserv-cache/client/lib/connections/cache_server_connection.rb
allocated objects by location
-----------------------------------
36373 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
24470 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:155
21057 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:172
48 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:61
38 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:54
32 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:56
31 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:172
24 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:62
12 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:127
9 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:40
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:71
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:165
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:30
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:59
6 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:214
6 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1615
5 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:263
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1029
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:322
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:17
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:434
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:435
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:445
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:42
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:125
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1617
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1529
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:444
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:446
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:447
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:40
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1445
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:877
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:39
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:45
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:46
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:13
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:145
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:31
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:111
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:144
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:98
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:179
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:181
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:213
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1640
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1642
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:343
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:530
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:557
allocated objects by class
-----------------------------------
47935 String
24519 Array
4894 IO::EAGAINWaitReadable
4894 Thread::Backtrace
7 MatchData
3 Hash
2 URI::HTTP
1 Net::BufferedIO
1 Net::HTTP
1 Net::HTTP::Get
1 Net::HTTPOK
1 TCPSocket
1 Time
retained memory by gem
-----------------------------------
NO DATA
retained memory by file
-----------------------------------
NO DATA
retained memory by location
-----------------------------------
NO DATA
retained memory by class
-----------------------------------
NO DATA
retained objects by gem
-----------------------------------
NO DATA
retained objects by file
-----------------------------------
NO DATA
retained objects by location
-----------------------------------
NO DATA
retained objects by class
-----------------------------------
NO DATA
Allocated String Report
-----------------------------------
11926 ""
7019 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:172
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
10 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:54
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:179
1 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:67
4894 "Resource temporarily unavailable - read would block"
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
4894 "UTF-8"
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
4894 "read would block"
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
...
Relevant code:
report = MemoryProfiler.report do
  begin
    response = nil
    Net::HTTP.new(uri.host, uri.port).start { |http|
      request = Net::HTTP::Get.new uri.request_uri
      response = http.request request
    }
  rescue Net::ReadTimeout => e
    raise RequestTimeoutError.new(e.message)
  rescue Exception => e
    raise ServerConnectionError.new(e.message)
  end
end
report.pretty_print
Network traffic data from Charles proxy:
Request Header: 168 bytes
Response Header: 288 bytes
Request: -
Response: 43.67 MB (45792735 bytes)
Total: 43.67 MB (45793191 bytes)
Almost all of those strings allocated in net/protocol.rb#L153 are short-lived and are reclaimed by the next GC run. Those allocated objects are thus pretty harmless and will not result in a significantly larger process size.
You get a lot of exceptions (which are used for control flow here when reading from the socket) and the actual read data, which is appended to the buffer. All of these operations create temporary (internally used) objects.
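The IO::EAGAINWaitReadable entries in the class breakdown come from exactly this pattern. You can reproduce it with a pipe that has no data ready yet (a sketch, not the actual net/protocol code; the 16 KB chunk size is an arbitrary choice):

```ruby
rd, wt = IO.pipe
result =
  begin
    rd.read_nonblock(16_384)     # raises when no data is available yet
  rescue IO::EAGAINWaitReadable  # the exception class from the profiler output
    :would_block
  end
rd.close
wt.close
puts result  # => would_block
```

Each such raise allocates an exception object plus backtrace strings, which is why Thread::Backtrace and IO::EAGAINWaitReadable show up thousands of times in the report.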
As such, you are probably measuring the wrong thing. What would probably make more sense is to:
measure the maximum RSS of the process (i.e. the "used" memory);
and to measure the amount of additional memory still allocated after the read.
You will notice that (depending on the memory pressure on your machine) the RSS will not grow significantly above the amount of data actually read, and that the memory still referenced after the read is about the same size as the read data, with essentially no internal intermediate objects still referenced.
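A sketch of measuring RSS from inside Ruby on Linux (the /proc layout is Linux-specific; on other platforms you would shell out to something like ps -o rss):

```ruby
# Read the resident set size (in kB) of the current process from /proc.
def rss_kb
  File.read("/proc/#{Process.pid}/status")[/VmRSS:\s+(\d+)/, 1].to_i
end

before = rss_kb
payload = "x" * (10 * 1024 * 1024)  # hold ~10 MB of actual data
puts "RSS grew by ~#{rss_kb - before} kB while holding #{payload.bytesize} bytes"
```

Sampling this before, during, and after the download shows the peak working set, which is the number that actually matters here, rather than the cumulative allocation total memory_profiler reports.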

How to make a Ruby loop wait 15 min after every 300 elements?

I want to make a Ruby script that will print all followers for any account, but Twitter's API gives me an error (too many requests) after 300 printed followers. How can I make the loop print the first 300, wait for 15 minutes, then continue where it left off with the next 300?
You can do it like this:
some_variable = 0
loop do
  # your code that puts the element
  some_variable += 1
  sleep(15 * 60) if (some_variable % 300).zero?
end

Garbage collector in Ruby 2.2 provokes unexpected CoW

How do I prevent the GC from provoking copy-on-write when I fork my process? I have recently been analyzing the garbage collector's behavior in Ruby, due to some memory issues that I encountered in my program (I run out of memory on my 60-core, 0.5 TB machine even for fairly small tasks). For me this really limits the usefulness of Ruby for running programs on multicore servers. I would like to present my experiments and results here.
The issue arises when the garbage collector runs during forking. I have investigated three cases that illustrate the issue.
Case 1: We allocate a lot of objects (strings no longer than 20 bytes) in the memory using an array. The strings are created using a random number and string formatting. When the process forks and we force the GC to run in the child, all the shared memory goes private, causing a duplication of the initial memory.
Case 2: We allocate a lot of objects (strings) in the memory using an array, but the string is created using the rand.to_s function, hence we remove the formatting of the data compared to the previous case. We end up with a smaller amount of memory being used, presumably due to less garbage. When the process forks and we force the GC to run in the child, only part of the memory goes private. We have a duplication of the initial memory, but to a smaller extent.
Case 3: We allocate fewer objects compared to before, but the objects are bigger, such that the amount of memory allocated stays the same as in the previous cases. When the process forks and we force the GC to run in the child all the memory stays shared, i.e. no memory duplication.
Here I paste the Ruby code that has been used for these experiments. To switch between cases you only need to change the “option” value in the memory_object function. The code was tested using Ruby 2.2.2, 2.2.1, 2.1.3, 2.1.5 and 1.9.3 on an Ubuntu 14.04 machine.
Sample output for case 1:
ruby version 2.2.2
proces pid log priv_dirty shared_dirty
Parent 3897 post alloc 38 0
Parent 3897 4 fork 0 37
Child 3937 4 initial 0 37
Child 3937 8 empty GC 35 5
The exact same code has been written in Python and in all cases the CoW works perfectly fine.
Sample output for case 1:
python version 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2]
proces pid log priv_dirty shared_dirty
Parent 4308 post alloc 35 0
Parent 4308 4 fork 0 35
Child 4309 4 initial 0 35
Child 4309 10 empty GC 1 34
Ruby code
$start_time = Time.new

# Monitor use of Resident and Virtual memory.
class Memory
  shared_dirty = '.+?Shared_Dirty:\s+(\d+)'
  priv_dirty = '.+?Private_Dirty:\s+(\d+)'
  MEM_REGEXP = /#{shared_dirty}#{priv_dirty}/m

  # get memory usage
  def self.get_memory_map(pids)
    memory_map = {}
    memory_map[:pids_found] = {}
    memory_map[:shared_dirty] = 0
    memory_map[:priv_dirty] = 0
    pids.each do |pid|
      begin
        lines = File.read("/proc/#{pid}/smaps")
      rescue
        lines = nil
      end
      if lines
        lines.scan(MEM_REGEXP) do |shared_dirty, priv_dirty|
          memory_map[:pids_found][pid] = true
          memory_map[:shared_dirty] += shared_dirty.to_i
          memory_map[:priv_dirty] += priv_dirty.to_i
        end
      end
    end
    memory_map[:pids_found] = memory_map[:pids_found].keys
    return memory_map
  end

  # get the processes and get the value of the memory usage
  def self.memory_usage
    pids = [$$]
    result = self.get_memory_map(pids)
    result[:pids] = pids
    return result
  end

  # print the values of the private and shared memories
  def self.log(process_name = '', log_tag = "")
    if process_name == "header"
      puts " %-6s %5s %-12s %10s %10s\n" % ["proces", "pid", "log", "priv_dirty", "shared_dirty"]
    else
      time = Time.new - $start_time
      mem = Memory.memory_usage
      puts " %-6s %5d %-12s %10d %10d\n" % [process_name, $$, log_tag, mem[:priv_dirty] / 1000, mem[:shared_dirty] / 1000]
    end
  end
end

# function to delay the processes a bit
def time_step(n)
  while Time.new - $start_time < n
    sleep(0.01)
  end
end

# create an object of specified size. The option argument can be changed from 0 to 2
# to visualize the behavior of the GC in various cases
#
# case 0 (default) : we make a huge array of small objects by formatting a string
# case 1 : we make a huge array of small objects without formatting a string (we use the to_s function)
# case 2 : we make a smaller array of big objects
def memory_object(size, option = 1)
  result = []
  count = size / 20
  if option > 3 or option < 1
    count.times do
      result << "%20.18f" % rand
    end
  elsif option == 1
    count.times do
      result << rand.to_s
    end
  elsif option == 2
    count = count / 10
    count.times do
      result << ("%20.18f" % rand) * 30
    end
  end
  return result
end

##### main #####
puts "ruby version #{RUBY_VERSION}"
GC.disable

# print the column headers and first line
Memory.log("header")

# Allocation of memory
big_memory = memory_object(1000 * 1000 * 10)
Memory.log("Parent", "post alloc")

lab_time = Time.new - $start_time
if lab_time < 3.9
  lab_time = 0
end

# start the forking
pid = fork do
  time = 4
  time_step(time + lab_time)
  Memory.log("Child", "#{time} initial")

  # force GC when nothing happened
  GC.enable; GC.start; GC.disable

  time = 8
  time_step(time + lab_time)
  Memory.log("Child", "#{time} empty GC")
  sleep(1)
  STDOUT.flush
  exit!
end

time = 4
time_step(time + lab_time)
Memory.log("Parent", "#{time} fork")

# wait for the child to finish
Process.wait(pid)
Python code
import re
import time
import os
import random
import sys
import gc

start_time = time.time()

# Monitor use of Resident and Virtual memory.
class Memory:
    def __init__(self):
        self.shared_dirty = '.+?Shared_Dirty:\s+(\d+)'
        self.priv_dirty = '.+?Private_Dirty:\s+(\d+)'
        self.MEM_REGEXP = re.compile("{shared_dirty}{priv_dirty}".format(shared_dirty=self.shared_dirty, priv_dirty=self.priv_dirty), re.DOTALL)

    # get memory usage
    def get_memory_map(self, pids):
        memory_map = {}
        memory_map["pids_found"] = {}
        memory_map["shared_dirty"] = 0
        memory_map["priv_dirty"] = 0
        for pid in pids:
            try:
                lines = None
                with open("/proc/{pid}/smaps".format(pid=pid), "r") as infile:
                    lines = infile.read()
            except:
                lines = None
            if lines:
                for shared_dirty, priv_dirty in re.findall(self.MEM_REGEXP, lines):
                    memory_map["pids_found"][pid] = True
                    memory_map["shared_dirty"] += int(shared_dirty)
                    memory_map["priv_dirty"] += int(priv_dirty)
        memory_map["pids_found"] = memory_map["pids_found"].keys()
        return memory_map

    # get the processes and get the value of the memory usage
    def memory_usage(self):
        pids = [os.getpid()]
        result = self.get_memory_map(pids)
        result["pids"] = pids
        return result

    # print the values of the private and shared memories
    def log(self, process_name='', log_tag=""):
        if process_name == "header":
            print " %-6s %5s %-12s %10s %10s" % ("proces", "pid", "log", "priv_dirty", "shared_dirty")
        else:
            global start_time
            Time = time.time() - start_time
            mem = self.memory_usage()
            print " %-6s %5d %-12s %10d %10d" % (process_name, os.getpid(), log_tag, mem["priv_dirty"]/1000, mem["shared_dirty"]/1000)

# function to delay the processes a bit
def time_step(n):
    global start_time
    while (time.time() - start_time) < n:
        time.sleep(0.01)

# create an object of specified size. The option argument can be changed from 0 to 2
# to visualize the behavior of the GC in various cases
#
# case 0 (default) : we make a huge array of small objects by formatting a string
# case 1 : we make a huge array of small objects without formatting a string (we use the to_s function)
# case 2 : we make a smaller array of big objects
def memory_object(size, option=2):
    count = size/20
    if option > 3 or option < 1:
        result = ["%20.18f" % random.random() for i in xrange(count)]
    elif option == 1:
        result = [str(random.random()) for i in xrange(count)]
    elif option == 2:
        count = count/10
        result = [("%20.18f" % random.random())*30 for i in xrange(count)]
    return result

##### main #####
print "python version {version}".format(version=sys.version)
memory = Memory()
gc.disable()

# print the column headers and first line
memory.log("header")

# Allocation of memory
big_memory = memory_object(1000 * 1000 * 10)
memory.log("Parent", "post alloc")

lab_time = time.time() - start_time
if lab_time < 3.9:
    lab_time = 0

# start the forking
pid = os.fork()
if pid == 0:
    Time = 4
    time_step(Time + lab_time)
    memory.log("Child", "{time} initial".format(time=Time))

    # force GC when nothing happened
    gc.enable(); gc.collect(); gc.disable()

    Time = 10
    time_step(Time + lab_time)
    memory.log("Child", "{time} empty GC".format(time=Time))
    time.sleep(1)
    sys.exit(0)

Time = 4
time_step(Time + lab_time)
memory.log("Parent", "{time} fork".format(time=Time))

# Wait for child process to finish
os.waitpid(pid, 0)
EDIT
Indeed, calling the GC several times before forking the process solves the issue, and I am quite surprised. I have also run the code using Ruby 2.0.0 and the issue doesn't even appear, so it must be related to the generational GC, just like you mentioned.
However, if I call the memory_object function without assigning the output to any variable (I am only creating garbage), then the memory is duplicated. The amount of memory that is copied depends on the amount of garbage that I create: the more garbage, the more memory becomes private.
Any ideas how I can prevent this?
Here are some results
Running the GC in 2.0.0
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 3664 post alloc 67 0
Parent 3664 4 fork 1 69
Child 3700 4 initial 1 69
Child 3700 8 empty GC 6 65
Calling memory_object( 1000*1000) in the child
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 3703 post alloc 67 0
Parent 3703 4 fork 1 70
Child 3739 4 initial 1 70
Child 3739 8 empty GC 15 56
Calling memory_object( 1000*1000*10)
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 3743 post alloc 67 0
Parent 3743 4 fork 1 69
Child 3779 4 initial 1 69
Child 3779 8 empty GC 89 5
UPD2
I suddenly figured out why all the memory goes private when you format the string: you generate garbage during formatting while the GC is disabled, then enable the GC, and you've got holes of released objects inside your generated data. Then you fork, and new garbage starts to occupy those holes; the more garbage, the more private pages.
So I added a cleanup function to run the GC every 2000 iterations (just enabling lazy GC didn't help):
count.times do |i|
  cleanup(i)
  result << "%20.18f" % rand
end

# ......snip........

def cleanup(i)
  if (i % 2000).zero?
    GC.enable; GC.start; GC.disable
  end
end

##### main #####
This resulted in (generating memory_object(1000 * 1000 * 10) after the fork):
RUBY_GC_HEAP_INIT_SLOTS=600000 ruby gc-test.rb 0
ruby version 2.2.0
proces pid log priv_dirty shared_dirty
Parent 2501 post alloc 35 0
Parent 2501 4 fork 0 35
Child 2503 4 initial 0 35
Child 2503 8 empty GC 28 22
Yes, it affects performance, but only before forking, i.e. it increases load time in your case.
UPD1
I just found the criterion by which Ruby 2.2 sets the old-object bits: it's 3 GCs. So if you add the following before forking:
GC.enable; 3.times {GC.start}; GC.disable
# start the forking
you will get (with option 1 on the command line):
$ RUBY_GC_HEAP_INIT_SLOTS=600000 ruby gc-test.rb 1
ruby version 2.2.0
proces pid log priv_dirty shared_dirty
Parent 2368 post alloc 31 0
Parent 2368 4 fork 1 34
Child 2370 4 initial 1 34
Child 2370 8 empty GC 2 32
But this needs further testing concerning the behavior of such objects on future GCs. At least after 100 GCs, :old_objects remains constant, so I suppose it should be OK.
Log with GC.stat is here
By the way, there's also the RGENGC_OLD_NEWOBJ_CHECK option to create old objects from the start. I doubt it's a good idea in general, but it may be useful in particular cases.
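You can watch the promotion happen through GC.stat (a sketch; the :old_objects key exists on Ruby 2.2+, and the exact key set varies between versions):

```ruby
before = GC.stat[:old_objects]
3.times { GC.start }  # objects that survive 3 GCs get the old bit set
after = GC.stat[:old_objects]
puts "old objects: #{before} -> #{after}"
```

Long-lived data promoted to "old" before the fork is exactly what stays shared after it, since the GC no longer touches those objects' flags on every minor collection.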
First answer
My proposition in the comment above was wrong; actually, the bitmap tables are the savior.
(option = 1)
ruby version 2.0.0
proces pid log priv_dirty shared_dirty
Parent 14807 post alloc 27 0
Parent 14807 4 fork 0 27
Child 14809 4 initial 0 27
Child 14809 8 empty GC 6 25 # << almost everything stays shared <<
I also built Ruby Enterprise Edition by hand and tested it; it's only about half better than the worst cases.
ruby version 1.8.7
proces pid log priv_dirty shared_dirty
Parent 15064 post alloc 86 0
Parent 15064 4 fork 2 84
Child 15065 4 initial 2 84
Child 15065 8 empty GC 40 46
(I made the script run strictly 1 GC, by increasing RUBY_GC_HEAP_INIT_SLOTS to 600k)

ArgumentError: In `load': marshal data too short

I want to use multiple processes. I have to send the data that was bubble-sorted in different child processes back to the parent process and then merge it. This is part of my code:
rd1, wt1 = IO.pipe # reader & writer

pid1 = fork {
  rd1.close
  numbers = Marshal.load(Marshal.dump(copylist[0, p]))
  bubble_sort(numbers)
  sList[0] = numbers.clone
  wt1.write Marshal.dump(sList[0])
  Process.exit!(true)
}

Process.waitpid(pid1)
Process.waitpid(pid2)
wt1.close
wt2.close

pid5 = fork {
  rd5.close
  a = Marshal.load(rd1.gets)
  b = Marshal.load(rd2.gets)
  mList[0] = merge(a, b).clone
  wt5.write Marshal.dump(mList[0])
  Process.exit!(true)
}
There are pid1...pid7, rd1...rd7, wt1...wt7. pid1...pid4 are bubble-sort 4 part of data. pid5 and 6 merge data from pid1, 2 and pid 3, 4. Finally, pid7 merges the data from pid5 and 6.
When data size is small, it succeeds, but when I input larger data (10000):
Data example : 121 45 73 89 11 452 515 32 1 99 4 88 41 53 159 482 2013 2 ...
then errors occur: in `load': marshal data too short (ArgumentError), and another kind of error: in `load': instance of IO needed (TypeError). The first error's line is in pid5 (a = ...) and pid6 (b = ...). The other kind of error's line is in pid7 (b = ...). Is my data too big for this method?
Marshal.load and Marshal.dump work with binary data. The problem with the short reads is here:
a = Marshal.load(rd1.gets)
b = Marshal.load(rd2.gets)
#gets reads up to a newline (or end of file) and then stops. The trouble is that newline bytes may be present in the binary data created by Marshal.dump.
Change gets to read in both lines.
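A minimal sketch of that fix, reduced to a single child; the key points are using read instead of gets and closing the parent's copy of the writer so that read sees EOF:

```ruby
rd, wt = IO.pipe

pid = fork do
  rd.close
  wt.write Marshal.dump([5, 3, 1].sort)  # binary payload; may contain "\n" bytes
  wt.close
  Process.exit!(true)
end

wt.close                      # parent must close its writer, or read never returns
data = Marshal.load(rd.read)  # read slurps everything up to EOF
rd.close
Process.waitpid(pid)
puts data.inspect  # => [1, 3, 5]
```

Note the read happens before Process.waitpid here: reading first also avoids a deadlock where a child blocks writing a large dump into a full pipe buffer while the parent blocks in waitpid.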

Ruby IO#read max length for single read

How can I determine the maximum length IO#read can handle in a single read on the current platform?
irb(main):301:0> File.size('C:/large.file') / 1024 / 1024
=> 2145
irb(main):302:0> s = IO.read 'C:/large.file'
IOError: file too big for single read
That message comes from io.c, remain_size. It is emitted when the (remaining) size of the file is greater than or equal to LONG_MAX. That value depends on the platform your Ruby has been compiled with.
At least in Ruby 1.8.7, the maximum Fixnum value happens to be just about half of that value, so you could get the limit with
2 * 2 ** (1..128).find { |i| (1 << i).kind_of?(Bignum) } - 1
You should rather not rely on that, though.
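In practice, rather than probing that limit, read large files in fixed-size chunks. A sketch (the 16 MB chunk size is an arbitrary choice, and the Tempfile is just a small stand-in for the large file):

```ruby
require 'tempfile'

# Small stand-in for the multi-gigabyte file from the question.
tmp = Tempfile.new('large')
tmp.write("a" * 100_000)
tmp.flush

chunk_size = 16 * 1024 * 1024  # 16 MB per read
total = 0
File.open(tmp.path, 'rb') do |f|
  while (chunk = f.read(chunk_size))  # IO#read(n) returns nil at EOF
    total += chunk.bytesize           # process the chunk instead of keeping it
  end
end
puts total  # => 100000
tmp.close!
```

This sidesteps the LONG_MAX check entirely and keeps memory usage bounded by the chunk size.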
