Efficiency of each_char vs. getc (and analogous methods) in Ruby - ruby

How do iterator methods such as each_char, each_line, etc. compare to while-looped getc, gets, etc. for reading large files? Mainly, what is the overhead of each approach, which one will use more memory, and which one will be faster?
Essentially, which will be better in terms of memory, overhead, and speed if file is a 100 MB text file?
file.each_char do |ch|
  # process ch
end
vs
ch = ""
until file.eof?
  ch = file.getc
  # process ch
end
Or is there an even better method of doing this?

You can easily answer questions like this definitively using the Ruby Standard Library's benchmark package.
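For instance, a minimal sketch (the file path is a placeholder; absolute timings will vary with your Ruby version and disk):
require 'benchmark'

PATH = 'big.txt'   # placeholder: path to your ~100 MB file

Benchmark.bm(10) do |x|
  x.report('each_char') { File.open(PATH) { |f| f.each_char { |ch| } } }
  x.report('getc')      { File.open(PATH) { |f| f.getc until f.eof? } }
  x.report('each_line') { File.open(PATH) { |f| f.each_line { |line| } } }
end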
Or is there an even better method of doing this?
I think so. While a C program might reasonably process a file one character at a time, Ruby has a towering edifice of String and Array functionality built in, and it's all written in C, so it runs quickly.
It may not seem efficient to split a line into words and then use only a few of them, or just count them, but it's probably a lot faster than parsing that line one character at a time, and the result is easier to read and rewrite if necessary.
In general, I would suggest that a Ruby program leverage the library and do as much of its work as possible with objects at the highest useful level of abstraction.
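For example, a hedged sketch of that line-oriented style, counting words in a big file (the path is a placeholder):
word_count = 0
File.open('big.txt') do |f|        # placeholder path
  f.each_line do |line|
    word_count += line.split.size  # String#split does the per-character work in C
  end
end
puts word_count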

Related

When to reuse functions?

I have a function in my program that generates random strings.
func randString(s []rune, l int) string
s is a slice of runes containing the possible characters in the string. I pass
in a rune slice of both capital and lowercase alphabetic characters. l
determines the length of the string. This works great. But I also need to
generate random hex strings for HTML color codes.
It seems all sources say that it's good programming practice to reuse code. So I
made another []rune that held [0-9a-f] and fed that into randString. That
was before I realized that the stdlib already includes formatting verbs for int
types that suit me perfectly.
In practice, is it better to reuse my randString function or to code a separate
(more efficient) function? I would generate a single random int and Sprintf it,
rather than looping and generating 6 random ints as randString does.
1) If there is an exact solution in the standard library, you should almost always choose to use that.
Because:
The standard library is tested. It does what it says (or what we expect it to do). Even if there is a bug in it, it will be discovered (by you or by others) and will get fixed without any work or effort on your part.
The standard library is written as idiomatic Go. Chances are it's faster than the solution you could write, even if it does a little more than what you need.
The standard library improves over time. Your program may get faster just because an implementation was improved in a new Go release, without any effort on your part.
The solution already exists, which means it's ready and requires no time from you.
The standard library is well and widely known, so your code will be easier to understand by others, and by you later on.
If you've already imported the package (or will in the near future), using it means zero or minimal overhead: libraries are statically linked, so the function you need is already linked into your program (into the compiled executable binary).
2) If there is a solution provided by the standard library, but it is a general solution to similar problems and/or offers more than what you need:
That means it's most likely not the optimal solution for you, as it may use more memory and/or run more slowly than your own solution would.
You need to decide whether you're willing to accept that small performance loss in exchange for the gains listed above. This also depends on how, and how often, you need to use it (e.g. if it's a one-time call, it shouldn't matter; if it's in a loop called very frequently, it should be examined carefully).
3) And at the other end: you should avoid using a solution provided by the standard library if it wasn't designed to solve your problem...
If it just happens that its "side-effect" solves your problem: Even if the current implementation would be acceptable, if it was designed for something else, future improvements to it could render your usage of it completely useless or could even break it.
Not to mention it would confuse other developers trying to read, improve or use your code (you included, after a certain amount of time).
As a side note: this question is exactly about the function you're trying to create: How to generate a random string of a fixed length in golang? I've presented multiple very efficient solutions there.
This is fairly subjective and not Go-specific, but I think you shouldn't reuse code just for the sake of reuse. The more code you reuse, the more dependencies you create between different parts of your app, and as a result it becomes more difficult to maintain and modify. Code that is easy to understand and modify is much more important, especially if you work in a team.
For your particular example I would do the following.
If a random color is generated only once in your package/application then using fmt.Sprintf("#%06x", rand.Intn(256*256*256)) is perfectly fine (as suggested by Dave C).
If random colors are generated in multiple places I would create function func randColor() string and call it. Note that now you can optimize randColor implementation however you like without changing the rest of the code. For example you could have implemented randColor using randString initially and then switched to a more efficient implementation later.

Imperative vs Functional Programming in Ruby

I am reading this article about how to program in Ruby in Functional Style.
https://code.google.com/p/tokland/wiki/RubyFunctionalProgramming
One of the examples that took my attention is the following:
# No (mutable):
output = []
output << 1
output << 2 if i_have_to_add_two
output << 3
# Yes (immutable):
output = [1, (2 if i_have_to_add_two), 3].compact
Whereas the "mutable" option is less safe because we're changing the value of the array, the immutable one seems less efficient because it calls .compact. That means it has to iterate the array to return a new one without the nil values.
In that case, which option is preferable? And in general, how do you choose between immutability (functional) and performance (in the case where the imperative solution is faster)?
You're not wrong. It is often the case that a purely functional solution will be slower than a destructive one. Immutable values generally mean a lot more allocation has to go on unless the language is very well optimized for them (which Ruby isn't).
However, it doesn't often matter. Worrying about the performance of specific operations is not a great use of your time 99% of the time. Shaving off a microsecond from a piece of code that runs 100 times a second is simply not a win.
The best approach is usually to do whatever makes your code cleanest. Very often this means taking advantage of the functional features of the language — for example, map and select instead of map! and keep_if. Then, if you need to speed things up, you have a nice, clean codebase that you can make changes to without fear that your changes will make one piece of code stomp over another piece's data.
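A minimal sketch of the contrast, using plain arrays:
numbers = [1, 2, 3, 4, 5]

# Non-destructive: each call returns a new array; `numbers` is untouched.
evens   = numbers.select(&:even?)
doubled = numbers.map { |n| n * 2 }

# Destructive counterparts mutate the receiver in place:
numbers.map! { |n| n * 2 }   # `numbers` is now [2, 4, 6, 8, 10]
numbers.keep_if(&:even?)     # still [2, 4, 6, 8, 10], since all are now even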

How would I write a syntax checker?

I am interested in writing a syntax checker for a language. Basically what I want to do is make a cli tool that will take an input file, and then write errors that it finds. The language I would want to parse is basically similar to Turing, and it is rather ugly and sometimes a pain to work with. The only other syntax checker for it must be used
What language should I use? I figured I would write it in Ruby, but Python may be faster or have better parsing libraries.
What libraries should I use, in Ruby or Perl? Which would be easier?
Is there a primer to read for defining a grammar? Such a task can become confusing, and I'm not sure how I would handle it.
If it were me, I would write it in Ruby, and worry about speed later. If the program is a runaway hit, I might add a native gem to speed up the slowest bit, but leave most of it in Ruby. If it becomes the most important program in the world, or if I had nothing else to do, I might rewrite it in C or C++ at that point, but not before.
And I would do all parsing using Treetop.
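To give a taste of what that looks like, here is a minimal sketch with a made-up arithmetic grammar (it assumes the treetop gem is installed; real projects keep the grammar in a .treetop file):
require 'treetop'

Treetop.load_from_string <<-GRAMMAR
  grammar Arithmetic
    rule expression
      number ('+' number)*
    end

    rule number
      [0-9]+
    end
  end
GRAMMAR

parser = ArithmeticParser.new   # class generated from the grammar above
if parser.parse('1+2+3')
  puts 'syntax OK'
else
  puts parser.failure_reason    # describes the first syntax error found
end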
I might add that writing and optimizing a language parser directly in C is an interesting learning experience. You get roughly no string-handling help, so you end up doing all the parsing yourself, but you have a chance to do only the minimum amount of processing. It's sort of the opposite of the Ruby experience. To get maximum speed you end up doing things like writing front ends for malloc, where multiple objects you know you never have to free are allocated permanently within a single malloced block. Although it is typical to use yacc(1) with C/C++, you can certainly write a recursive-descent parser and have an even deeper learning experience.
Of course, having done all that already, I'm happy to stick with Ruby these days.

Matching regular expressions against non-Strings in Ruby without conversion

If a Ruby regular expression is matched against something that isn't a String, the to_str method is called on that object to get an actual String to match against. I want to avoid this behavior; I'd like to match regular expressions against objects that aren't Strings but can logically be thought of as randomly accessible sequences of bytes, with all accesses mediated through a byte_at() method (similar in spirit to Java's CharSequence.charAt() method).
For example, suppose I want to find the byte offset in an arbitrary file of an arbitrary regular expression; the expression might be multi-line, so I can't just read in a line at a time and look for a match in each line. If the file is very big, I can't fit it all in memory, so I can't just read it in as one big string. However, it would be simple enough to define a method that gets the nth byte of a file (with buffering and caching as needed for speed).
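A hedged sketch of that idea (FileBytes and byte_at are made-up names; the buffering and caching mentioned above are omitted):
class FileBytes
  def initialize(path)
    @io = File.open(path, 'rb')  # binary mode: no newline or encoding translation
  end

  # Returns the byte at offset n (0-based) as an Integer, or nil past end of file.
  def byte_at(n)
    @io.seek(n, IO::SEEK_SET)
    @io.getbyte
  end
end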
Eventually, I'd like to build a fully featured rope class, like in Ruby Quiz #137, and I'd like to be able to use regular expressions on them without the performance loss of converting them to strings.
I don't want to get up to my elbows in the innards of Ruby's regular expression implementation, so any insight would be appreciated.
You can't. This wasn't supported in Ruby 1.8.x, probably because it's such an edge case; and in 1.9 it wouldn't even make sense. Ruby 1.9 doesn't map its strings to bytes in any user-serviceable fashion; instead it uses character code points, so that it can support the multitude of encodings that it accepts. And 1.9's new optimized regex engine, Oniguruma, is also built around the same concept of encodings and code points. Bytes just don't enter into the picture at this level.
I have a suspicion that what you're asking for is a case of premature optimization. For any reasonable Ruby object, implementing to_str shouldn't be a huge performance hurdle. If it is, then Ruby's probably the wrong tool for you, as it abstracts and insulates you from your raw data in all sorts of ways.
Your example of looking for a byte sequence in a large binary file isn't an ideal use case for Ruby -- you'd be better off using grep or some other Unix tool. If you need the results in your Ruby program, run it as a system process using backticks and process the output.
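For example (a sketch; 'needle' and big_file.bin are placeholders, and the flags are GNU grep's):
# -a treats the binary file as text, -o prints only the matched part,
# -b prefixes each match with its byte offset.
output  = `grep -abo 'needle' big_file.bin`
offsets = output.lines.map { |line| line.split(':', 2).first.to_i }
p offsets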

What are your strategies to keep the memory usage low?

Ruby is truly memory-hungry - but also worth every single bit.
What do you do to keep the memory usage low? Do you avoid big strings and use smaller arrays/hashes instead, or is it not something you worry about, leaving the job to the garbage collector?
Edit: I found a nice article about this topic here - old but still interesting.
I've found Phusion's Ruby Enterprise Edition (a fork of mainline Ruby with much-improved garbage collection) to make a dramatic difference in memory usage... Plus, they've made it extraordinarily easy to install (and to remove, if you find the need).
You can find out more and download it on their website.
I really don't think it matters all that much.
Making your code less readable in order to improve memory consumption is something you should only do if you need to. And by need, I mean you have a specific performance profile and specific metrics indicating that the change will address the issue.
If you have an application where memory is going to be the limiting factor, then Ruby may not be the best choice. That said, I have found that my Rails apps generally consume about 40-60mb of RAM per Mongrel instance. In the scheme of things, this isn't very much.
You might be able to run your application on the JVM with JRuby - the Ruby VM is currently not as advanced as the JVM for memory management and garbage collection. The 1.9 release adds many improvements, and there are alternative VMs under development as well.
Choose data structures that are efficient representations, scale well, and do what you need.
Use algorithms that work with efficient data structures rather than bloated but easier ones.
Look elsewhere: Ruby has a C bridge, and it's much easier to be memory-conscious in C than in Ruby.
Ruby developers are quite lucky since they don't have to manage memory themselves.
Be aware, though, that Ruby allocates objects. For instance, something as simple as
100.times { 'foo' }
allocates 100 string objects (strings are mutable, and each copy requires its own memory allocation).
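One hedged workaround is to hoist the literal into a frozen constant:
FOO = 'foo'.freeze   # allocated once
100.times { FOO }    # reuses the same frozen string on every iteration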
If you are using a library that allocates a lot of objects, make sure that no alternatives are available and that your choice is worth the garbage collector cost (you might not have a lot of requests/s, or might not care about a few dozen ms per request).
Creating a hash really allocates more than one object; for instance,
{'joe' => 'male', 'jane' => 'female'}
doesn't allocate 1 object but 7: one hash and four literal strings, plus two copies of the key strings (Ruby dups and freezes string keys when storing them).
Use symbol keys if you can, as they won't be garbage collected. However, because they won't be garbage collected, you want to make sure not to use totally dynamic keys, like converting a username to a symbol; otherwise you will 'leak' memory.
Example: somewhere in your app, you apply to_sym to a user's name, like:
hash[current_user.name.to_sym] = something
When you have hundreds of users, that could be OK, but what happens if you have one million users? Here are the numbers:
ruby-1.9.2-head >
# Current memory usage : 6608K
# Now, add one million randomly generated short symbols
ruby-1.9.2-head > 1000000.times { (Time.now.to_f.to_s).to_sym }
# Current memory usage : 153M, even after a garbage collector run.
# Now, imagine if the symbols were just 20x longer than that:
ruby-1.9.2-head > 1000000.times { (Time.now.to_f.to_s * 20).to_sym }
# Current memory usage : 501M
Never convert uncontrolled arguments to symbols without checking them first; doing so can easily lead to a denial of service.
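A minimal sketch of that check (ALLOWED_KEYS is a made-up whitelist):
ALLOWED_KEYS = %w[name email role].freeze

# Only convert known-safe values to symbols; arbitrary input stays a String.
def safe_key(raw)
  ALLOWED_KEYS.include?(raw) ? raw.to_sym : raw
end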
Also remember to avoid nesting loops more than three levels deep, because it makes maintenance difficult. Limiting the nesting of loops and functions to about three levels is a good rule of thumb for readability, and deeply nested loops over large collections also multiply the amount of work done.
Here are some links in regards:
http://merbist.com
http://blog.monitis.com
When deploying a Rails/Rack webapp, use REE or some other copy-on-write friendly interpreter.
Tweak the garbage collector (see https://www.engineyard.com/blog/tuning-the-garbage-collector-with-ruby-1-9-2 for example)
Try to cut down the number of external libraries/gems you use since additional code uses memory.
If you have a part of your app that is really memory-intensive, then it may be worth rewriting it as a C extension, or complementing it by invoking other/faster/better-optimized programs (if you have to process vast amounts of text data, maybe you can replace that code with calls to grep, awk, sed, etc.)
I am not a Ruby developer, but I think some techniques and methods are true of any language:
Use the minimum-size variable suitable for the job.
Destroy and close variables and connections when they are not in use.
However, if you have an object you will need to use many times, consider keeping it in scope.
In loops that manipulate a big string, do the work on a smaller string and then append it to the bigger string.
Use decent (try/catch/finally) error handling to make sure objects and connections are closed.
When dealing with data sets, only return the minimum necessary.
Other than in extreme cases, memory usage isn't something to worry about. The time you'd spend trying to reduce memory usage would buy a LOT of gigabytes.
Take a look at Small Memory Software - Patterns for Systems with Limited Memory. You don't specify what sort of memory constraint, but I assume RAM. While not Ruby-specific, I think you'll find some useful ideas in this book - the patterns cover RAM, ROM and secondary storage, and are divided into major techniques of small data structures, memory allocation, compression, secondary storage, and small architecture.
The only thing we've ever had that has actually been worth worrying about is RMagick.
The solution is to make sure you're using RMagick version 2, and to call Image#destroy! when you're done using your image.
Avoid code like this:
str = ''
veryLargeArray.each do |foo|
  str += foo
  # but str << foo is fine (read update below)
end
which will create each intermediate string value as a String object and then remove its only reference on the next iteration. This junks up the memory with tons of increasingly long strings that have to be garbage collected.
Instead, use Array#join:
str = veryLargeArray.join('')
This is implemented in C very efficiently and doesn't incur the String creation overhead.
UPDATE: Jonas is right in the comment below. My warning holds for += but not <<.
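You can verify the gap yourself with the benchmark package; a minimal sketch:
require 'benchmark'

words = Array.new(10_000) { 'word' }

Benchmark.bm(6) do |x|
  x.report('+=')   { s = ''; words.each { |w| s += w } } # new String every pass
  x.report('<<')   { s = ''; words.each { |w| s << w } } # appends in place
  x.report('join') { words.join }                        # single pass in C
end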
I'm pretty new at Ruby, but so far I haven't found it necessary to do anything special in this regard (that is, beyond what I just tend to do as a programmer generally). Maybe this is because memory is cheaper than the time it would take to seriously optimize for it (my Ruby code runs on machines with 4-12 GB of RAM). It might also be because the jobs I'm using it for are not long-running (i.e. it's going to depend on your application).
I'm using Python, but I guess the strategies are similar.
I try to use small functions/methods, so that local variables get automatically garbage collected when you return to the caller.
In larger functions/methods I explicitly delete large temporary objects (like lists) when they are no longer needed. Closing resources as early as possible might help too.
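A rough Ruby analogue of the same idea (load_big_list and process are hypothetical helpers):
data = load_big_list   # hypothetical: builds a large temporary structure
process(data)          # hypothetical: its only consumer
data = nil             # drop the last reference so the GC can reclaim it
GC.start               # optional hint; the GC would get there eventually anyway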
Something to keep in mind is the life cycle of your objects. If your objects are not passed around much, the garbage collector will eventually kick in and free them. However, if you keep referencing them, it may take some cycles before the garbage collector frees them. This is particularly true in Ruby 1.8, where the garbage collector uses a poor implementation of the mark-and-sweep technique.
You may run into this situation when you apply some "design patterns", like decorator, that keep objects in memory for a long time. It may not be obvious when trying an example in isolation, but in real-world applications where thousands of objects are created at the same time, the cost of memory growth will be significant.
When possible, use arrays instead of other data structures. Try not to use floats when integers will do.
Be careful when using gem/library methods. They may not be memory optimized. For example, the Ruby PG::Result class has a method 'values' which is not optimized. It will use a lot of extra memory. I have yet to report this.
Replacing the malloc(3) implementation with jemalloc will immediately decrease your memory consumption by up to 30%. I've created the 'jemalloc' gem to achieve this instantly.
'jemalloc' GEM: Inject jemalloc(3) into your Ruby app in 3 min
I try to keep arrays, lists, and datasets as small as possible. Individual objects do not matter much, as creation and garbage collection are pretty fast in most modern languages.
In cases where you have to read some sort of huge dataset from the database, make sure to read it in a forward-only manner and process it in little bits instead of loading everything into memory first.
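In a Rails app, for instance, that usually means batched reads instead of one all-at-once load (a sketch assuming ActiveRecord):
# Instantiates records in batches of 1000 rather than all at once.
User.find_each(batch_size: 1000) do |user|
  handle(user)   # hypothetical per-row processing
end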
Don't use a lot of symbols; they stay in memory until the process gets killed, because symbols never get garbage collected.
