How to safely let users run arbitrary Ruby code? - ruby

I realize this sounds a little crazy, but I'm working on a project for which I need a server to run user-provided Ruby code and return the result.
I'm looking to prevent something like this:
system("rm -rf /")
eval("something_evil")
# etc...
I'm sure there must be some reasonably safe way to do this, as it already exists at places like tryruby.org. Any help is greatly appreciated, thanks!

Three suggestions:
1) Take a look at Ruby taint levels. These provide some degree of protection against eval('evil_code')-type things (see the sketch just after this list).
2) Unless users actually need access to the local file system, use something like fakefs.
3) No matter what else you do, follow Tronic's suggestion (it can be a pain to set up, but limited chroot jails are about the only way to make absolutely sure that users cannot access resources you don't explicitly want them to).
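To make suggestion 1 concrete, here is the classic taint/$SAFE idiom as a minimal sketch. Note that this is historical: safe level 4 existed in Ruby 1.8, was dropped in later versions, and the whole $SAFE/taint mechanism was removed in Ruby 3.0, so treat it as one layer at most, not a complete defense.

untrusted = "1 + 1"   # pretend this string came from the user

result = Thread.start do
  $SAFE = 4           # thread-local in 1.8: locks this thread down hard
  eval(untrusted)     # most dangerous operations now raise SecurityError
end.value

puts result           # => 2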

Run the program ptraced with a whitelist of allowed syscalls, as user/group nobody, with resource limits (memory usage etc), in a minimal chroot.
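Parts of that recipe can be driven from Ruby itself. A hedged sketch, assuming a prepared /srv/sandbox chroot and a parent process running as root, and leaving out the OS-specific ptrace/seccomp syscall whitelist:

require 'etc'

pid = fork do
  Process.setrlimit(:CPU, 5)                  # at most 5 seconds of CPU time
  Process.setrlimit(:AS, 64 * 1024 * 1024)    # at most 64 MB of address space
  Dir.chroot('/srv/sandbox')                  # minimal root containing the interpreter
  Dir.chdir('/')
  Process::Sys.setgid(Etc.getgrnam('nogroup').gid)  # drop group first,
  Process::Sys.setuid(Etc.getpwnam('nobody').uid)   # then user
  exec('/bin/ruby', '/untrusted.rb')
end
Process.wait(pid)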

A "blank slate" is an object stripped of (most of) its methods.
A "clean room" is an object within which you evaluate potentially unsafe room.
If you evaluate the code in a "clean room" which is also a "blank slate," cranking the safe level up as high as it will go, you will afford yourself a great deal of protection. Nothing in security is sure, so this should be considered a layer in your security, not necessarily the only layer.
This answer shows how to do it.
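For illustration, a minimal sketch of the combination, assuming Ruby 1.9+ where BasicObject is the built-in blank slate (on 1.8 you would undef_method everything yourself):

class CleanRoom < BasicObject
  # Only what you define here is reachable from the evaluated string.
  def sum(a, b)
    a + b
  end
end

room = CleanRoom.new
# instance_eval points `self` at the clean room, so bare method calls in
# the user's code can only reach CleanRoom's own methods.
result = room.instance_eval('sum(1, 2)')
puts result  # => 3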

I had the same problem but then came across eval.so and decided to write an API wrapper for it, called Sandie. It's as easy as:
sandie = Sandie.new(language: 'ruby')
# => #<Sandie:0x00000002e30650>
sandie.evaluate(code: 'puts "hello world"')
# => {"stdout"=>"hello world\n", "stderr"=>"", "wallTime"=>487, "exitCode"=>0}
It also supports a whole lot of other languages, like C#, Perl, Lua, and Java.

Related

Why is eval evil in makefiles

I have had several people tell me at this point that eval is evil in makefiles. I originally took their word for it, but now I'm starting to question it. Take the following makefile:
%.o:
	$(eval targ=$*)
	echo making $(targ)

%.p:
	echo making $*
I understand that if you then did make "a;blah;".o, it would run blah (which could be an rm -rf /, or worse). However, if you ran make "a;blah;".p you would get the same result without the eval. Furthermore, if you have permission to run make, you would also have permission to run blah directly, and wouldn't need to run make at all. So now I'm wondering: is eval really an added security risk in makefiles, and if so, what should be avoided?
Why is eval evil?
Because it grants the whole power of the language to things that you don't actually want to have that power.
Often it is used as "poor man's metaprogramming": constructing a piece of code and then running it. Often it looks like eval("do stuff with " + thing), where thing is only known at runtime because it is supplied from outside.
However, if you don't make sure that thing belongs to some tiny subset of language you need in that particular case (like, is a string representation of one valid name), your code would grant permissions to stuff you didn't intend to. For example, if thing is "apples; steal all oranges" then oranges would be stolen.
If you do make sure that thing belongs to some subset of the language you actually need, then two problems arise:
You are reimplementing language features (parsing source), which is not DRY and is often a sign of misusing a language.
If you resort to this, it means that simpler means are not suitable and your use case is somewhat complicated, which makes validating your input harder.
Thus, it's really easy to break security with eval, and taking enough precautions to make it safe is hard. That's why, if you see an eval, you should suspect a possible security flaw. It's just a heuristic, not a law.
eval is a very powerful tool - as powerful as the whole language - and it's too easy to shoot yourself in the foot with it.
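To illustrate in Ruby (all names here are invented for the example):

def do_stuff_with_apples
  puts 'doing stuff with apples'
end

thing = 'apples'  # imagine this arrives from outside at runtime

# Dangerous: eval grants the whole language to the input. If thing were
# "apples; system('rm -rf /')" that command would run too.
eval("do_stuff_with_#{thing}")

# Safer: confine the input to the tiny subset you actually need, then
# dispatch without handing the parser to the user.
allowed = %w[apples pears]
raise ArgumentError, "unexpected thing: #{thing}" unless allowed.include?(thing)
send("do_stuff_with_#{thing}")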
Why is this particular use of eval not good?
Imagine a task that requires making some steps that depend on a file, and that can be done with various files (say, a user supplies a VirtualBox image of a machine that is to be brought up and integrated into the existing network infrastructure).
Imagine, say, a lazy administrator who automated this task - all commands are written in a makefile, because it fits better than an sh script (some steps depend on others and sometimes don't need to be re-done).
The administrator made sure that all the commands are OK and correct, and granted sudoers permission to run make with that specific makefile. Now, if the makefile contains a line like yours, then with a properly crafted name for the VirtualBox image you could pwn the system, or something like that.
Of course, I had to stretch far to make this particular case a problem, but it's a potential problem anyway.
Makefiles usually offer simple contracts: you name the target, and some very specific stuff - written in the makefile - gets done. Using eval the way you've used it offers a different contract: the same stuff as above, but you can also supply commands in some complicated way and they will get executed too.
You could try patching the contract by making sure that $* cannot cause any trouble. Describing exactly what that means could be an interesting exercise, if you want to keep as much flexibility in target names as possible.
Otherwise, be aware of the extended contract and don't use solutions like this in cases where that extension would cause problems. If you intend your solution to be reusable by as many people as possible, you should make its contract cause as few problems as possible, too.

Is checking Perl function arguments worth it?

There's a lot of buzz about MooseX::Method::Signatures and even before that, modules such as Params::Validate that are designed to type check every argument to methods or functions. I'm considering using the former for all my future Perl code, both personal and at my place of work. But I'm not sure if it's worth the effort.
I'm thinking of all the Perl code I've seen (and written) before that performs no such checking. I very rarely see a module do this:
my ($a, $b) = @_;
defined $a or croak '$a must be defined!';
!ref $a or croak '$a must be a scalar!';
...
@_ == 2 or croak "Too many arguments!";
Perhaps because it's simply too much work without some kind of helper module, or perhaps because in practice we don't send excess arguments to functions, and we don't send arrayrefs to methods that expect scalars - or if we do, we have use warnings; and we quickly hear about it - a duck-typing approach.
So is Perl type checking worth the performance hit, or are its strengths predominantly shown in compiled, strongly typed languages such as C or Java?
I'm interested in answers from anyone who has experience writing Perl that uses these modules and has seen benefits (or not) from their use; if your company/project has any policies relating to type checking; and any problems with type checking and performance.
UPDATE: I read an interesting article on the subject recently, called Strong Testing vs. Strong Typing. Ignoring the slight Python bias, it essentially states that type checking can be suffocating in some instances, and even if your program passes the type checks, it's no guarantee of correctness - proper tests are the only way to be sure.
If it's important for you to check that an argument is exactly what you need, it's worth it. Performance only matters when you already have correct functioning. It doesn't matter how fast you can get a wrong answer or a core dump. :)
Now, that sounds like a stupid thing to say, but consider some cases where it isn't. Do I really care what's in @_ here?
sub looks_like_a_number { $_[0] !~ /\D/ }
sub is_a_dog { eval { $_[0]->DOES( 'Dog' ) } }
In those two examples, if the argument isn't what you expect, you are still going to get the right answer because the invalid arguments won't pass the tests. Some people see that as ugly, and I can see their point, but I also think the alternative is ugly. Who wins?
However, there are going to be times when you need guard conditions because your situation isn't so simple. The next thing you have to pass your data to might expect it to be within certain ranges or of certain types, and might not fail elegantly.
When I think about guard conditions, I think through what could happen if the inputs are bad and how much I care about the failure. I have to judge that by the demands of each situation. I know that sucks as an answer, but I tend to like it better than a bondage-and-discipline approach where you have to go through all the mess even when it doesn't matter.
I dread Params::Validate because its code is often longer than my subroutine. The Moose stuff is very attractive, but you have to realize that it's a way for you to declare what you want and you still get what you could build by hand (you just don't have to see it or do it). The biggest thing I hate about Perl is the lack of optional method signatures, and that's one of the most attractive features in Perl 6 as well as Moose.
I basically concur with brian. How much you need to worry about your method's inputs depends heavily on how much you are concerned that a) someone will input bad data, and b) bad data will corrupt the purpose of the method. I would also add that there is a difference between external and internal methods. You need to be more diligent about public methods because you're making a promise to consumers of your class; conversely you can be less diligent about internal methods as you have greater (theoretical) control over the code that accesses it, and have only yourself to blame if things go wrong.
MooseX::Method::Signatures is an elegant solution to adding a simple declarative way to explain the parameters of a method. Method::Signatures::Simple and Params::Validate are nice but lack one of the features I find most appealing about Moose: the Type system. I have used MooseX::Declare and by extension MooseX::Method::Signatures for several projects and I find that the bar to writing the extra checks is so minimal it's almost seductive.
Yes, it's worth it - defensive programming is one of those things that is always worth it.
The counterargument I've seen presented to this is that checking parameters on every single function call is redundant and a waste of CPU time. This argument's supporters favor a model in which all incoming data is rigorously checked when it first enters the system, but internal methods have no parameter checks because they should only be called by code which will pass them data which has already passed the checks at the system's border, so it is assumed to still be valid.
In theory, I really like the sound of that, but I can also see how easily it can fall like a house of cards if someone uses the system (or the system needs to grow to allow use) in a way that was unforeseen when the initial validation border is established. All it takes is one external call to an internal function and all bets are off.
In practice, I'm using Moose at the moment and Moose doesn't really give you the option to bypass validation at the attribute level, plus MooseX::Declare handles and validates method parameters with less fuss than unrolling @_ by hand, so it's pretty much a moot point.
I want to mention two points here. The first is tests; the second is the performance question.
1) Tests
You mentioned that tests can do a lot, and that they are the only way to be sure that your code is correct. In general I would say this is absolutely correct. But tests themselves only solve one problem.
If you write a module, you have two problems, or let's say two different people who use your module: you as the developer, and a user who just uses your module. Tests help with the first, ensuring your module is correct and does the right thing, but they don't help the user who just uses your module.
For the latter, I have one example. I had written a module using Moose and some other stuff, and my code always ended in a segmentation fault. I began to debug my code and search for the problem, and I spent around 4 hours finding the error. In the end the problem was that I had used Moose with the Array trait: I called the "map" function and didn't provide a subroutine reference, just a string or something else.
Sure, this was an absolutely stupid error of mine, but I spent a long time debugging it. A check that the argument is a subref would have cost the developer 10 seconds of time, and would have saved me, and probably others, a lot more time.
I also know of other examples. I had written a REST client for an interface, completely OOP with Moose. You always got objects back, and you could change their attributes, but of course it didn't call the REST API for every change you made. Instead you change your values and in the end call an update() method that transfers the data. Now I had a user who wrote:
$obj->update({ foo => 'bar' })
Sure, he got an error back that update() did not work. But of course it didn't work: the update() method doesn't accept a hashref. It only synchronizes the current state of the object with the online service. The correct code would be:
$obj->foo('bar');
$obj->update();
The first call "works" because I never checked the arguments, and I don't throw an error if someone passes more arguments than I expect. The method just starts like:
sub update {
    my ( $self ) = @_;
    ...
}
Sure, all my tests pass 100%. But handling these errors that aren't really errors costs me time too, and it probably costs the user a lot more time.
So in the end: yes, tests are the only correct way to ensure that your code works correctly. But that doesn't mean type checking is meaningless. Type checking is there to help all the non-developers (of your module) use your module correctly, and it saves you and others the time spent finding dumb errors.
2) Performance
The short version: you don't care about performance until you have to care.
That means that until your module works too slowly, performance is always fast enough and you don't need to worry about it. If your module really does work too slowly, you need further investigation, and for that investigation you should use a profiler like Devel::NYTProf to see what is actually slow.
And I would say that in 99% of cases the slowness is not your type checking; it is your algorithm. You do a lot of computation, call functions too often, etc. Often it helps to take a completely different approach: use a better algorithm, do caching, or something else, and the performance hit turns out not to be your type checking. But even if the checking is the performance hit, then just remove it where it matters.
There is no reason to drop the type checking where performance doesn't matter. Do you think type checking matters in a case like the one above, where I had written a REST client? 99% of the performance cost there is the number of requests that go to the web service, or the time such a request takes. Not using type checking or MooseX::Declare etc. would probably speed up absolutely nothing.
And even if you do see performance disadvantages, sometimes they are acceptable, because the speed doesn't matter or because something gives you greater value. DBIx::Class is slower than pure SQL with DBI, but DBIx::Class gives you a lot in return.
Params::Validate works great, but of course checking args slows things down. Tests are mandatory (at least in the code I write).
Yes it's absolutely worth it, because it will help during development, maintenance, debugging, etc.
If a developer accidentally sends the wrong parameters to a method, a useful error message will be generated, instead of the error being propagated down to somewhere else.
I'm using Moose extensively for a fairly large OO project I'm working on. Moose's strict type checking has saved my bacon on a few occasions. Most importantly, it has helped avoid situations where "undef" values are incorrectly passed to a method. In just those instances alone it saved me hours of debugging time.
The performance hit is definitely there, but it can be managed. Two hours of using NYTProf helped me find a few Moose attributes that I was grinding too hard, and I just refactored my code and got a 4x performance improvement.
Use type checking. Defensive coding is worth it.
Patrick.
Sometimes. I generally do it whenever I'm passing options via hash or hashref. In these cases it's very easy to misremember or misspell an option name, and checking with Params::Check can save a lot of troubleshooting time.
For example:
sub revise {
    my ($file, $options) = @_;
    my $tmpl = {
        test_mode       => { allow => [0,1],     'default' => 0 },
        verbosity       => { allow => qr/^\d+$/, 'default' => 1 },
        force_update    => { allow => [0,1],     'default' => 0 },
        required_fields => { 'default' => [] },
        create_backup   => { allow => [0,1],     'default' => 1 },
    };
    my $args = check($tmpl, $options, 1)
        or croak "Could not parse arguments: " . Params::Check::last_error();
    ...
}
Prior to adding these checks, I'd forget whether the names used underscores or hyphens, pass require_backup instead of create_backup, etc. And this is for code I wrote myself--if other people are going to use it, you should definitely do some sort of idiot-proofing. Params::Check makes it fairly easy to do type checking, allowed value checking, default values, required options, storing option values to other variables and more.
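Since the parent question of this digest is Ruby-flavored: the same idiot-proofing of an options hash is cheap to hand-roll in Ruby as well. A hedged sketch mirroring the template above (all names invented):

REVISE_DEFAULTS = { test_mode: false, verbosity: 1, force_update: false,
                    required_fields: [], create_backup: true }.freeze

def revise(file, options = {})
  unknown = options.keys - REVISE_DEFAULTS.keys
  raise ArgumentError, "unknown option(s): #{unknown.join(', ')}" unless unknown.empty?

  opts = REVISE_DEFAULTS.merge(options)
  unless opts[:verbosity].is_a?(Integer) && opts[:verbosity] >= 0
    raise ArgumentError, 'verbosity must be a non-negative integer'
  end
  opts  # proceed with validated, defaulted options
end

revise('a.txt', create_backup: false)    # => merged, validated options
# revise('a.txt', create_backups: false) # typo => ArgumentError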

Function before form or form before function?

What do you believe in? As an incomplete basis for a good product - would you prefer a mess of code that is horrible to look at but works perfectly for what it's supposed to do, or a beautiful set of well-organized classes (or something else, if OO doesn't float your boat) that has buggy functionality and still needs a lot of work?
If you were just handed a project to improve and work towards completion, which would you prefer? And what do you put the emphasis on when starting a new project?
Well-written code will be easier to debug. If the code is too messy, even if it has "no bugs" (unlikely), it will not be maintainable.
I prefer both. However, if I were going to be handed a project, I would go with buggy but beautiful every time.
If we're talking about abstractions, I'd prefer to have just a small base set of working features. While it might be nice to have a pre-built, working library on top of this, the user can always create greater functionality from that working base.
Maybe they don't look nice, but again, a user can always just create a wrapper to make things look nice. I vote function.
Real artists ship - so something, that is both beautiful and gets the job done. But when in doubt, err on the "gets the job done" side, even if it isn't perfection.

What are the things you would like improved in the Ruby language?

What are the things you wish Ruby (and more generally the Ruby community) would improve?
I read somewhere that Ruby is the love-child of Smalltalk and LISP, with Miss Perl as the Nanny.
I have a lot of respect for Ruby's parents, but I'm not sure I like the influence Miss Perl had on the child. Specifically, I don't like the predefined variables: I need a cheat sheet to know what they mean. You could say "just don't use them". Well, I don't... but other people do. And when I download a plugin on the Web, I have no choice but to fetch my cheat-sheet if I ever need to go and touch the source code. I just wish they would remove those from the language itself.
Also, I think that Ruby is too much of a moving target. My code breaks on every new Ruby upgrade, even on minor releases. This is true also of Ruby on Rails and most Rails plugins I have worked with: they just change all the time, and nobody seems to care whether the changes break everything or not. IMHO, although I love a lot of things in Ruby, this lack of stability is almost a show-stopper.
I wish people would consider backward compatibility between minor releases as an unbreakable rule when releasing a new language (or library or framework) version.
I wish that some of the lesser used modules of the standard library were documented.
Make require-ing files less painful. Don't ask me how, but maybe have one file dedicated to knowing the paths involved and just get rid of the relative path crud from everything else.
Getting rid of the artificial distinction between Modules and Classes would be nice.
Both Modules and Classes are Namespaces. Modules are also Mixins, while Classes aren't. Classes can also be instantiated while Modules can't. This distinction is unnecessary. Just get rid of Modules and allow Classes to be used as Mixins.
An example of a language where this works is Newspeak.
I'd appreciate being able to install ruby 1.9 as an RPM rather than having to use the source.
Make Ruby completely Message Sending based, get rid of everything that is not a message send: local variables, global variables, instance variables, class hierarchy variables, constants, magic globals, magic constants, builtin operators, builtin keywords, even literals. See Self, Ioke or Newspeak for the incredible power and elegance this gains.
I wish they would get rid of the predefined variables: $!, $&, $+, etc.
I would like to have support for static compile-time metaprogramming. The Converge Programming Language might be a good starting point.
Replace the Mixin system with a Traits system.
Replace Exceptions with a Common Lisp style Conditions system.

How can I program defensively in Ruby?

Here's a perfect example of the problem: Classifier gem breaks Rails.
Original question:
One thing that concerns me as a security professional is that Ruby doesn't have a parallel of Java's package-privacy. That is, this isn't valid Ruby:
public module Foo
  public module Bar
    # factory method for new Bar implementations
    def self.new(...)
      SimpleBarImplementation.new(...)
    end
    def baz
      raise NotImplementedError.new('Implementing Classes MUST redefine #baz')
    end
  end
  private class SimpleBarImplementation
    include Bar
    def baz
      ...
    end
  end
end
It'd be really nice to be able to prevent monkey-patching of Foo::SimpleBarImplementation. That way, people who rely on the library know that nobody has messed with it. Imagine if somebody changed the implementation of MD5 or SHA1 on you! I can call freeze on these classes, but I have to do it on a class-by-class basis, and other scripts might modify them before I finish securing my application if I'm not very careful about load order.
Java provides lots of other tools for defensive programming, many of which are not possible in Ruby. (See Josh Bloch's book for a good list.) Is this really a concern? Should I just stop complaining and use Ruby for lightweight things and not hope for "enterprise-ready" solutions?
(And no, core classes are not frozen by default in Ruby. See below:)
require 'md5'
# => true
MD5.frozen?
# => false
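As an aside, the class-by-class freeze mentioned above looks like the following; load order still matters, since code loaded earlier may already have patched the class (a hedged sketch, using the modern digest library path):

require 'digest/md5'

Digest::MD5.freeze  # must be repeated for every class you care about

begin
  Digest::MD5.module_eval do
    def self.hexdigest(*)  # an attempted monkey-patch...
      'owned'
    end
  end
rescue => e
  puts e.class  # ...raises FrozenError (RuntimeError before Ruby 2.5)
end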
I don't think this is a concern.
Yes, the mythical "somebody" can replace the implementation of MD5 with something insecure. But in order to do that, the mythical somebody must actually be able to get his code into the Ruby process. And if he can do that, then he presumably could also inject his code into a Java process and e.g. rewrite the bytecode for the MD5 operation. Or just intercept the keypresses and not actually bother with fiddling with the cryptography code at all.
One of the typical concerns is: I'm writing this awesome library, which is supposed to be used like so:
require 'awesome'
# Do something awesome.
But what if someone uses it like so:
require 'evil_cracker_lib_from_russian_pr0n_site'
# Overrides crypto functions and sends all data to mafia
require 'awesome'
# Now everything is insecure because awesome lib uses
# cracker lib instead of builtin
And the simple solution is: don't do that! Educate your users that they shouldn't run untrusted code they downloaded from obscure sources in their security critical applications. And if they do, they probably deserve it.
To come back to your Java example: it's true that in Java you can make your crypto code private and final and what not. However, someone can still replace your crypto implementation! In fact, someone actually did: many open-source Java implementations use OpenSSL to implement their cryptographic routines. And, as you probably know, Debian shipped with a broken, insecure version of OpenSSL for years. So, all Java programs running on Debian for the past couple of years actually did run with insecure crypto!
Java provides lots of other tools for defensive programming
Initially I thought you were talking about normal defensive programming, wherein the idea is to defend the program (or your subset of it, or your single function) from invalid data input. That's a great thing, and I encourage everyone to go read that article.
However, it seems you are actually talking about "defending your code from other programmers."
In my opinion, this is a completely pointless goal, as no matter what you do, a malicious programmer can always run your program under a debugger, or use dll injection or any number of other techniques.
If you are merely seeking to protect your code from incompetent co-workers, this is ridiculous. Educate your co-workers, or get better co-workers.
At any rate, if such things are of great concern to you, ruby is not the programming language for you. Monkeypatching is in there by design, and to disallow it goes against the whole point of the feature.
Check out Immutable by Garry Dolley.
You can prevent redefinition of individual methods.
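I don't know Immutable's internals, but to give a feel for how per-method protection can be implemented, here is a hedged sketch using the method_added hook. This is not the gem's actual API; all names are invented:

module LockMethods
  def lock_method(name)
    @locked ||= {}
    @locked[name] = instance_method(name)
  end

  def method_added(name)
    return if @restoring || @locked.nil? || !@locked.key?(name)
    return if instance_method(name) == @locked[name]  # definition unchanged
    @restoring = true
    define_method(name, @locked[name])                # revert the redefinition
    @restoring = false
    warn "reverted attempt to redefine #{self}##{name}"
  end
end

class Secret
  extend LockMethods
  def token; 42; end
  lock_method :token
end

class Secret          # a would-be monkey-patch...
  def token; 0; end   # ...is undone by the hook
end

p Secret.new.token    # => 42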
I guess Ruby values that as a feature more than it treats it as a security issue. Duck typing, too.
E.g. I can add my own methods to the Ruby String class rather than extending or wrapping it.
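Concretely, that monkey-patch freedom looks like this (shout is an invented example method):

class String
  def shout
    upcase + '!'
  end
end

puts 'hello'.shout  # => HELLO!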
"Educate your co-workers, or get better co-workers" works great for a small software startup, and it works great for the big guns like Google and Amazon. It's ridiculous to think that every lowly developer contracted in for some small medical charts application in a doctor's office in a minor city.
I'm not saying we should build for the lowest common denominator, but we have to be realistic that there are lots of mediocre programmers out there who will pull in any library that gets the job done, paying no attention to security. How could they pay attention to security? Maybe they took an algorithms and data structures class. Maybe they took a compilers class. They almost certainly didn't take an encryption protocols class. They definitely haven't all read Schneier or any of the others out there who practically have to beg and plead with even very good programmers to consider security when building software.
I'm not worried about this:
require 'evil_cracker_lib_from_russian_pr0n_site'
require 'awesome'
I'm worried about awesome requiring foobar and fazbot, and foobar requiring has_gumption, and ... eventually two of these conflict in some obscure way that undoes an important security aspect.
One important security principle is "defense in depth" -- adding these extra layers of security helps keep you from accidentally shooting yourself in the foot. They can't completely prevent it; nothing can. But they help.
If monkey patching is your concern, you can use the Immutable module (or one of similar function).
Immutable
You could take a look at Why the Lucky Stiff's "Sandbox" project, which you can use if you are worried about running potentially unsafe code.
http://code.whytheluckystiff.net/sandbox/
An example (online TicTacToe):
http://www.elctech.com/blog/safely-exposing-your-app-to-a-ruby-sandbox
Raganwald has a recent post about this. In the end, he builds the following:
class Module
  def anonymous_module(&block)
    self.send :include, Module.new(&block)
  end
end

class Acronym
  anonymous_module do
    fu  = lambda { 'fu' }
    bar = lambda { 'bar' }
    define_method :fubar do
      fu.call + bar.call
    end
  end
end
That exposes fubar as a public method on Acronym, but keeps the internal guts (fu and bar) private and hides the helper module from outside view.
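A quick smoke test of the result (hedged; confirm on your own Ruby):

a = Acronym.new
puts a.fubar              # => fubar
puts a.respond_to?(:fu)   # => false; the lambdas never become methods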
If someone monkeypatches an object or a module, you need to look at two cases:
They added a new method. If they are the only one adding this method (which is very likely), no problems arise. If they are not the only one, you need to check whether both methods do the same thing, and tell the library developer about this severe problem.
They changed a method. Here you should research why the method was changed. Did they change it due to some edge-case behaviour, or did they actually fix a bug? Especially in the latter case, the monkeypatch is a good thing, because it fixes a bug in many places.
Besides that, you are using a very dynamic language on the assumption that programmers use this freedom in a sane way. The only way to remove this assumption is not to use a dynamic language.
