"Nullify" a variable in ruby [duplicate] - ruby

I have a use case where I need to dispose of some data the very moment I don't need it anymore, for security reasons.
I am writing a server in Ruby that deals with logins and passwords. I use BCrypt to store passwords in my database. My server receives the password, makes a bcrypt hash out of it, and then doesn't use the original password anymore.
I know of a kind of cyberattack that involves stealing data straight from RAM, and I am concerned that an attacker might steal a user's password in raw string form during the window in which the password is still in memory. I am not sure whether simply using password_in_string_form = nil would be enough.
I want to nullify the variable that holds the user's password the moment I am done with it. By nullify I mean something akin to using /dev/null to fill something with zeroes. The end goal is irreversible destruction of data.

I am not sure if simply using password_in_string_form = nil would be enough.
No, it would not be enough. The object might or might not be garbage collected immediately, and even if it was, that does not cause the contents to be erased from memory.
However, unless they have been frozen, Ruby strings are mutable. Thus, as long as you do not freeze the password string, you can replace its contents with zeroes, random characters, or whatever else before you let go of it. In particular, this should work, subject to a few provisos covered later:
(0 ... password_in_string_form.length).each do |i|
  password_in_string_form[i] = ' '
end
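As a hedged illustration only (the helper name below is invented), the same wipe can be attached to the bcrypt step described in the question via an ensure block, so the plaintext is overwritten even if hashing raises; it assumes the string has not been frozen:

require 'bcrypt'

# Hypothetical helper: hash the password, then overwrite the plaintext in place,
# even if BCrypt::Password.create raises.
def hash_then_wipe(password_in_string_form)
  BCrypt::Password.create(password_in_string_form)
ensure
  (0 ... password_in_string_form.length).each do |i|
    password_in_string_form[i] = "\0"
  end
end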
But care needs to be exercised, for the following approach, which may seem more idiomatic, does not work:
# SURPRISE! This does not reliably remove the password from memory!
password_in_string_form.replace(' ' * password_in_string_form.length)
Rather than updating the target string's contents in-place, replace() releases the contents to Ruby's internal allocator (which does not modify them), and chooses a strategy for the new contents based on details of the replacement.
The difference in effect between those two approaches should be a big warning flag for you, however. Ruby is a pretty high-level language. It gives you a lot of leverage, but at the cost of control over fine details, such as whether and how long data are retained in memory.
And that brings me to the provisos. Here are the main ones:
As you handle the password string, you must take care either to avoid making copies of it or of any part of it, or else to capture all the copies and trash them too. That will take some discipline and attention to detail, because it is very easy to make such copies (see the illustration after this list).
Trashing the password string itself may not be enough to achieve your objective. You also need to trash any other copies of the password in memory, such as those made upstream of isolating the password string. If yours is a web application, for instance, that would include the contents of the HTTP request in which the password was delivered to your application, and probably more strings derived from it than just the isolated password string. The same applies to other kinds of applications.
Passwords may not be the only thing you need to protect. If an adversary is in a position to steal passwords from the host machine's memory, then they are also in a position to steal the sensitive data that users access after logging in.
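To make the first proviso concrete, here is a minimal illustration (variable names invented) of how quickly unwiped copies pile up; every line below leaves another copy of the secret in memory that the wipe loop above never touches:

user     = "alice"                                  # hypothetical
password = gets.chomp                               # chomp returns a copy; the raw line from gets is still in memory
trimmed  = password.strip                           # strip returns yet another copy
log_line = "login attempt by #{user}: #{password}"  # interpolation copies the bytes again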
For these and other reasons, if the security requirements for your server dictate that in-memory copies of user passwords be destroyed as soon as they are no longer needed, then (pure) Ruby may not be an appropriate implementation language.
On the other hand, if an adversary obtains sufficient access to scrape passwords from memory / swap, then it's probably game over already. At minimum, they will have access to everything your application can access. That doesn't make the passwords altogether moot, but you should take it into consideration in your evaluation of how much effort to devote to this issue.

This is not possible in Ruby.
You will have to write some code specific to each implementation (Opal, TruffleRuby, JRuby, Rubinius, MRuby, YARV, etc.) to ensure that. Depending on the implementation, it may not even be possible inside the implementation's managed memory at all, without a separate piece of memory that you manage yourself.
I.e. you will probably need to have some tiny piece of native code that manages its own tiny piece of native memory and injects it into your Ruby program.
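On recent MRI/YARV (3.1 or later), one standard-library facility that approximates this is IO::Buffer, which owns a region of memory outside normal String handling and can be explicitly zeroed and released. The following is only a rough sketch: getting the secret into the buffer without ever materializing it as a Ruby String is the hard part, and the exact IO::Buffer#read signature may vary between versions:

secret = IO::Buffer.new(256)   # a region of memory we manage ourselves
secret.read($stdin, 256)       # read the secret straight into the buffer, with no intermediate String
# ... hash or compare the bytes held in the buffer ...
secret.clear                   # overwrite the whole region with zero bytes
secret.free                    # release the underlying memory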

Related

How to protect a random number seed?

I'm writing an application to protect passwords from key sniffers and screen retrievers. I have the user type in an easy-to-remember keyword or phrase (e.g., "password123", "amazon.com", "gmail") and I use that string to create a longer and stronger password, which is loaded into the clipboard. I want the application to be completely anonymous, so I don't save any information. To generate the passwords, I use a random number generator. I need a way for the user to carry around their seed that isn't vulnerable to key sniffers or screen retrievers. I'm thinking of a hardware token like a YubiKey, but I would like something easier and more mainstream. I tried using behavioral biometrics, but I managed to replicate them with a program too easily. Any better ideas?
What you are suggesting is a highly vulnerable approach.
First of all, there are open-source, well-proven algorithms and applications for the problem you are targeting. In security matters it is never a good idea to develop applications for critical operations (and handling passwords is always a critical operation) on your own; in particular, reinventing the wheel is in almost every case an endeavor doomed to fail.
Your approach is problematic in several points:
To be anonymous, the app needs to copy/paste or generate in place the password needed for some action. You will have a hard time preventing screen retrievers from capturing that unless you do some magic at the OS level.
Using one(!) random seed to protect several passwords makes each of them weaker than it was before.
Carrying this random seed on a USB key and freely plugging it into all kinds of computers that you cannot control is a problem, as each of them may be malicious. The random seed could be silently retrieved, altered, or deleted.
To give you some things to get paranoid about, google e.g. "blue pill" and you will see that the real problems live at a lower layer of the machine than the application you are talking about.
Instead have a look at the following approaches:
2 factor authentication (2FA) against malicious software and hardware stealing your passwords on type-in. See e.g. Google Authenticator.
Secure operating systems against such software entering your system and retrieving your passwords. See e.g. QubesOS
Read-only drives with a secure/anonymous OS for use on foreign and potentially dangerous machines, even for very critical tasks such as banking. See e.g. Tails OS on a DVD (not a USB key!)
Virtual machines to encapsulate potentially malicious tasks. See e.g. VirtualBox
Trustable password safes like KeePassX
In a nutshell: you can write such an application, but it will most likely be neither practical, nor secure, nor particularly usable. Sorry about that.

Secure erasing of password from memory in Ruby

I'm writing a Ruby application that will need to handle a user's enterprise password. I'd like to minimize the time the password is in memory to reduce the likelihood of the password being exposed.
In a native language, I would directly erase the data. In C#, I would use the SecureString class. In Java, I'd use char[]. But the best that I can find for Ruby is an old feature request that seems dead.
What is the standard for securely storing and erasing passwords from memory in Ruby? Is there a class that does this? A coding pattern similar to the char[] of Java?
A Ruby issue has existed for five years now (#5741) regarding secure erasure of secrets from memory. That issue also contains some links explaining why it is a good idea to erase passwords from memory. Recently, macOS had an issue with FileVault 2 because the password was kept in memory.
One possible solution shown in issue #5741 is:
pass = ""
$stdin.sysread(256, pass) # assuming a line-buffered terminal
io = StringIO.new("\0" * pass.bytesize)
io.read(pass.bytesize, pass)
It seems to work with ruby 2.3.1p112, but I can't promise it.
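If that behaviour holds on your interpreter, the trick can be wrapped in a small helper (the name is made up); like the snippet above, it depends on StringIO#read with an out-buffer overwriting the buffer in place, which is implementation-dependent:

require 'stringio'

def zero_string!(secret)
  StringIO.new("\0" * secret.bytesize).read(secret.bytesize, secret)
  nil
end

pass = +"hunter2"
zero_string!(pass)
pass  # now seven NUL bytes, if the in-place read behaved as hoped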

What does 'dirty-flag' / 'dirty-values' mean?

I see some variables named 'dirty' in some source code at work and some other code. What does it mean? What is a dirty flag?
Generally, dirty flags are used to indicate that some data has changed and needs to eventually be written to some external destination. It isn't written immediately because adjacent data may also change, and writing data in bulk is generally more efficient than writing individual values.
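As a rough sketch (class and file handling invented), the pattern in miniature looks like this: mutations set the flag, and the expensive write happens only if the flag is set:

require 'json'

class Settings
  def initialize(path)
    @path  = path
    @data  = File.exist?(path) ? JSON.parse(File.read(path)) : {}
    @dirty = false
  end

  def []=(key, value)
    return if @data[key] == value   # no change, nothing to mark
    @data[key] = value
    @dirty = true                   # memory and disk now differ
  end

  def save
    return unless @dirty            # skip the write entirely if nothing changed
    File.write(@path, JSON.generate(@data))
    @dirty = false
  end
end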
There's a deeper issue here: rather than "What does 'dirty' mean?" in the context of code, I think we should really be asking whether 'dirty' is an appropriate term for what is generally intended.
'Dirty' is potentially confusing and misleading. To many new programmers it will suggest corrupt or erroneous form data. The word 'dirty' implies that something is wrong and that the data needs to be purged or removed. Something dirty is, after all, undesirable, unclean and unpleasant.
If we mean 'the form has been touched' or 'the form has been amended but the changes haven't yet been written to the server', then why not 'touched' or 'writePending' rather than 'dirty'?
That I think, is a question the programming community needs to address.
Dirty could mean a number of things; you need to provide more context. But in a very general sense, a "dirty flag" is used to indicate whether something has been touched/modified.
For instance, see usage of "dirty bit" in the context of memory management in the wiki for Page Table
"Dirty" is often used in the context of caching, from application-level caching to architectural caching.
In general, there are two kinds of caching mechanisms: (1) write-through and (2) write-back, WT and WB for short.
WT means that a write is done synchronously both to the cache and to the backing store. (In the context of databases, for example, the cache and the backing store might be main memory and disk, respectively.)
In contrast, with WB, writing is initially done only to the cache. The write to the backing store is postponed until the cache blocks containing the data are about to be modified or replaced by new content.
Until then, that data exists only in the cache; these are the dirty values. When implementing a WB cache, you can set dirty bits to indicate whether a cache block contains dirty values or not.
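As a toy sketch, not modelled on any particular system, a write-back cache might track dirty bits like this, writing to the backing store only when flushed:

class WriteBackCache
  Entry = Struct.new(:value, :dirty)

  def initialize(backing_store)
    @store = backing_store        # anything addressable with [] and []=
    @cache = {}
  end

  def read(key)
    (@cache[key] ||= Entry.new(@store[key], false)).value
  end

  def write(key, value)
    entry = (@cache[key] ||= Entry.new(nil, false))
    entry.value = value
    entry.dirty = true            # cache and backing store now disagree
  end

  def flush
    @cache.each do |key, entry|
      next unless entry.dirty
      @store[key] = entry.value   # write back only the dirty entries
      entry.dirty = false
    end
  end
end

# usage: cache = WriteBackCache.new({}); cache.write(:x, 1); cache.flush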

Algorithm to validate instruction execution order

I'd like to write some simple code that helps to determine if some instructions have been executed in the intended order client-side. This is to make things difficult for anyone wishing to alter behaviour by editing byte code. For example, using a JMP so some instructions are never executed. I'm a bit short on ideas though.
To check if the last two instructions have been run in the correct order something simple like this could be used (pseudo code):
// Variables initialized by server
int lastInt;
// Monitored at regular intervals
// Saves using callback which could be tampered with
boolean bSomethingFishyHere;
int array[20];
...
execute( array[5], doStuff1() );
execute( array[6], doStuff2() );
...
// This could be tested remotely with all combinations of values possible
execute( int i, boolean b ){
    if( lastInt >= i ){
        bSomethingFishyHere = true;
    }
    lastInt = i;
}
I'm at a loss as to what approach could be used to verify that all instructions have been run in the correct order, though. Maybe I could add an array and have it populated by the server with some randomly ascending numbers, or use some sort of checksum. What are your suggestions?
The problem is that no matter what kind of book-keeping you do, a malicious user can always do the same book-keeping but skip over actually doing the work. If you can do it, so can they. You can rely on external mechanisms, like code signing, to ensure that your executable hasn't been tampered with, and CPU protections to prevent on-the-fly modification of the code in memory. But in that case you're only as secure as the platform you're running on.
I'm assuming this is some sort of copy-protection scheme. (If not, feel free to correct me, and you might get some better, more applicable advice.) There isn't a fool-proof way to prevent someone from running your software, but you can license an existing scheme into which the vendor has already put enough effort that, for the most part, it's not worth it for an attacker to bother.
The one way that is pretty much fool-proof, is if you control the code. Run the real meat of the code on your servers, and provide some sort of front end remote client.
This is just to patch some holes in an FPS game. The designers of the game left some temporary variables that can be changed in the console. Some of them are harmless, but others, like Texture transparent=true, are abusive. What I'm aiming for is to redesign an existing modification so that most of the code is on the server, as you suggest. The variables in question are set in the "world" that is mimicked by the client. Ultimately, I'm planning to extend some classes so they ignore those variables, and I just need to monitor values where this isn't possible.
If you do want a short-term patch, a more practical approach (than the one you are looking at) would be to send encrypted bytecodes to the client and use a special classloader to decrypt them on the fly. Beware, however, that it wouldn't be that difficult for a hacker to reverse engineer the classloader, get hold of the client-side bytecodes, and modify them to install the cheats.
So my advice is that any client-side "patch" to stop users tampering with the bytecodes is never going to be hack-proof. Skip that idea, and go straight to your long term solution of rearchitecting the game so that it is not necessary to trust that the client-side code plays by the rules.

What's the purpose of tainting Ruby objects?

I'm aware of the possibility to mark untrusted objects as tainted, but what's the underlying purpose and why should I do it?
One tracks taint as a security precaution, in order to ensure that untrusted data isn't mistakenly used in calculations or transactions, or interpreted as code.
Tracking taint via a built-in language feature is more clear and more reliable than tracking via coding conventions or relying on code review.
For example, input from the user can generally be considered 'untrusted' until it has been sanitized properly for insertion into the database. By marking the input as tainted, Ruby gives you (and libraries) a reliable way to check that satisfactory sanitization has taken place before the data is used, which helps prevent, for example, a SQL injection attack.
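A hedged illustration of the old API (taint was deprecated in Ruby 2.7 and removed in later versions, so this only runs on older interpreters):

input = gets.chomp      # strings read from IO arrive tainted
input.tainted?          # => true
copy = input.dup        # taint propagates to derived strings
copy.tainted?           # => true
copy.untaint            # declare the value trusted once you have validated it
copy.tainted?           # => false
# Under the old $SAFE levels, using a tainted string in sensitive operations
# such as File.open raised SecurityError.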
For an example of an "ancient" (2005) coding practice that demonstrates how taint was tracked without such Perl and Ruby modules, read some good old Joel:
http://www.joelonsoftware.com/articles/Wrong.html
It used to be a pretty standard practice when writing CGIs in Perl. There is even a FAQ on it. The basic idea was that the runtime could guarantee that you did not implicitly trust a tainted value.
