Ruby tempfile corruption of binary files

After a lot of digging around I've found that RubyZip can corrupt binary files. Taking a closer look, it seems the Tempfile class can't correctly re-open binary files. To demonstrate the effect, take the following script:
require 'tempfile'
tmp = Tempfile.new('test.bin', Dir.getwd)
File.open('test.bin', 'rb') { |h| IO.copy_stream(h, tmp) } # => 2
# 2 is the expected number of bytes
tmp.close
# temporary file (looking in OS) now really IS 2 bytes in size
tmp.open
# temporary file (looking in OS) now is 1 byte in size
tmp.binmode
# temporary file (looking in OS) still has the wrong number of bytes (1)
tmp.read.length # => 1
# And here is the problem I keep bumping into
The test.bin file I'm using contains only two bytes: 00 1a. After the corruption, the temporary file contains one byte: 00. If it matters, I'm running Windows.
Is there something that I'm missing? Is this intentional behavior? And if so, is there a way to prevent this behavior?
Thank you

The instance open method is documented as:
Opens or reopens the file with mode r+.
This means you can't depend on that method to reopen the file in binary mode. That's not a big deal, since the normal use of Tempfile is different:
tmp = Tempfile.new('test.bin', Dir.getwd)
File.open('test.bin', 'rb') { |h| IO.copy_stream(h, tmp) } # => 2
tmp.rewind
Now once it's been "rewound" you can read any data you want from it starting from the beginning.
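Putting that together, a minimal sketch of the original copy done without the close/open round-trip (the binmode call and rewind are the relevant pieces; on Windows this keeps the temp file from being reopened in text mode):
require 'tempfile'

tmp = Tempfile.new('test.bin', Dir.getwd)
tmp.binmode                                                  # keep Windows from translating line endings
File.open('test.bin', 'rb') { |h| IO.copy_stream(h, tmp) }   # => 2

tmp.rewind                                                   # rewind instead of close/open
tmp.read.length                                              # => 2, both bytes intact
tmp.close
tmp.unlink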

Related

IO read not reading entire file

I have a very large text file, 958 MB, and I have created the following script:
f = IO.read("Playback.xml").encode("utf-8", replace: nil)
separate_files_array = f.scan(/<Bla>.*?<\/Bla>/)
counter=0
separate_files_array.each do |x|
.
.
.
end
The code above only iterates over the first 31 occurrences of that regex, and I have no idea why.
No, there is no way these are all the occurrences; I can see they're not, and the script only runs for a few seconds, which makes no sense for a file that size.
The problem is that IO.read creates a buffer by default and loads only part of the file into the cache. In the end I used the following to answer my question:
Regexp search through a very large file
The reason is that File.read does not create a buffer by default, which with too big a file can cause the program to crash.
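A minimal sketch of a streaming alternative, assuming the <Bla> elements never nest and at most one opening/closing tag appears per line (the file name and tag are taken from the question; whole lines are collected, so any text around the tags on those lines is included too):
elements = []
buffer   = nil

File.foreach("Playback.xml") do |line|
  buffer = "" if buffer.nil? && line.include?("<Bla>")   # start a new element
  next if buffer.nil?

  buffer << line
  if line.include?("</Bla>")                             # element finished
    elements << buffer
    buffer = nil
  end
end

elements.each do |x|
  # process each <Bla>...</Bla> fragment here
end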

Ruby file writes in windows returning wrong file sizes?

I'm still learning Ruby, so I'm sure I'm doing something wrong here, but using Ruby 1.9.3 on Windows, I'm having a problem writing files of random ASCII garbage at a specific size. I need to be able to write these files for a test on an application I'm QAing. On Mac and on *nix, the file size is written correctly every time. But on Windows, it generates files of random size, generally between 1,024 and 1,031 bytes.
I suspect the problem is that one of the characters rstr generates counts as two characters, but... it seems like this shouldn't happen.
Here is my code:
num = 10
k = 1

for i in 1..num
  fname = "f#{i}.txt"
  f = File.new(fname, "w")
  # `size` (the number of 1 KB blocks per file) is defined elsewhere in the original script
  for k in 1..size
    rstr = "#{(1..1024).map{rand(255).chr}.join}"
    f.write rstr
    print " #{rstr.size} " # this returns 1024 every time.
    rstr = ""
  end
  f.close
end
Also tried:
opts = {}
opts[:encoding] = "UTF-8"
fname = "f#{i}.txt"
f = File.new(fname, "w", opts)
By default, files on Windows are opened in text mode, meaning that line endings and other details are adjusted.
If you want the files written byte for byte exactly as you produce them, you need to open them in binary mode:
File.open("foo", "wb") do |f|
  # ...
end
The b is ignored on POSIX operating systems, so your script remains cross-platform compatible.
Note: I used File.open with a block so the file is properly closed and its handle released once the block finishes. You no longer need to worry about closing the file ;-)
Hope this helps.
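Applied to the loop from the question, a sketch might look like this (num and size stand in for the question's own counters; the values here are arbitrary):
num  = 10
size = 1   # number of 1 KB blocks per file; pick whatever the test needs

(1..num).each do |i|
  File.open("f#{i}.txt", "wb") do |f|          # "wb" => no newline translation on Windows
    size.times do
      f.write((1..1024).map { rand(255).chr }.join)
    end
  end
end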
There is no ASCII character 255; the values go from 0 to 254.
If you try to print 255.chr, you'll get a multibyte character.
As Windows does not default to UTF-8, you'll get incorrect values, hence the problem you're facing!
Try adding #coding: utf-8 at the top of your file. It should get things working.

How can I securely erase a file?

Is there a Gem or means of securely erasing a file in Ruby? I'd like to avoid external programs that may not be present on the system.
By "secure erase" I'm referring to overwriting the file contents.
If you are on *nix, a pretty good way would be to just call shred using exec/open3/open4:
`shred -fxuz #{filename}`
http://www.gnu.org/s/coreutils/manual/html_node/shred-invocation.html
Check this similar post:
Writing a file shredder in python or ruby?
Something like this will get you started:
#!/usr/bin/env ruby

abort "Missing filename" if ARGV.empty?

ARGV.each do |filename|
  filesize = File.size(filename)
  [0x00, 0xff].each do |byte|
    File.open(filename, 'wb') do |fo|
      filesize.times { fo.print(byte.chr) }
    end
  end
end
It should get you close.
For more thoroughness, you could also use 0xaa and 0x55 for alternating 0 and 1 bits in each byte. Random.rand(256) will give you a random value from 0 to 255.
Just:
open the file
write some garbage, at least equal in amount to the current file size
flush() and close()
repeat N times, mixing garbage with zeroes and 0xff bytes on different passes (a sketch follows below)
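A minimal sketch of that multi-pass loop (the pass count, 4 KB block size, and patterns are arbitrary choices, not a standard):
require 'securerandom'

def overwrite_file(filename, passes: 3)
  size = File.size(filename)
  patterns = ["\x00", "\xff", nil]             # nil means "use random bytes" on that pass

  passes.times do |n|
    File.open(filename, 'r+b') do |f|
      pattern = patterns[n % patterns.size]
      written = 0
      while written < size
        chunk = [size - written, 4096].min
        f.write(pattern ? pattern * chunk : SecureRandom.random_bytes(chunk))
        written += chunk
      end
      f.flush
      f.fsync                                  # push the data to the device, not just the page cache
    end
  end
end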

How can I achieve a Unix tail operation without using files, in Ruby?

I used Ruby to read an image file and save that into a string.
partial_image100 = File.read("image.tga")
partial_image99 = File.read("image.tga")
partial_image98 = File.read("image.tga")
...
I read those images at one end of a distributed system. In another system I want to do a Tail operation. The system receives just the images.
I have around a 100 partial images. I want to do a Tail operation, like this:
tail -c +19 image100 >> image99
tail -c +19 image99 >> image98
tail -c +19 image98 >> image97
...
Basically it just removes the first 18 bytes of the partial image and appends what is left to the next image.
The problem is that this is slow: calling 100 Unix commands from Ruby takes too long. I want to refactor this so that it happens entirely in the Ruby world, just in memory, with no files.
How can I do this in Ruby?
Thanks
edit:
The images are stored in a hash like this:
{"27"=>"\u0000\u0000\u0002\u0000\u0000\u0000\u0000\u0000\u0000\u0000\u000E\u0001\xD0\a\xD0\a\u0018 \xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF\u0000\xFF\xFF...
EDIT:
You have all the relevant code here: https://gist.github.com/989563
There are two files. The code and a hash object encoded in json in a file. When you run the code there will be two image files created at /tmp
/tmp/image-tail-merger.tga – The output from the tail-merge algorithm
/tmp/image-/time/.tga – the output from the in-memory-tail algorithm
Currently the in-memory algorithm fails because the generated image is a Picasso.
If you manage to make the in-memory-algorithm generate the same image that the tail-merge algorithm do then you have succeeded.
EDIT:
I got it right finally!!!
Here is the code
https://gist.github.com/989563
I might look at File::Tail, similar to the Perl module.
require 'file/tail'   # gem install file-tail

File.open(filename) do |log|
  log.extend(File::Tail)
  log.interval = 10
  log.backward(10)
  log.tail { |line| puts line }
end
You can also monkey-patch your own File to use File::Tail as well for cleaner usage.
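A minimal sketch of that monkey-patch idea, following the file-tail gem's documented usage (the log file name is just a placeholder):
require 'file/tail'

class File
  include File::Tail
end

File.open("some.log") do |log|
  log.interval = 10
  log.backward(10)
  log.tail { |line| puts line }
end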
You may want to take a look at String#unpack (and its inverse, Array#pack).
In your case something like this should do what you want:
trunked = image.unpack('@18C*').pack('C*')  # '@18' skips the first 18 bytes, like tail -c +19
You might try something like this:
image100 = "some image string"
image99 = "some other image string"
image99 += image100.slice(18, image100.size - 18)  # append everything except the 18-byte header
EDIT: In your specific example you could iterate through the entire image hash like this (tail -c +19 keeps everything from byte offset 18 onward, and the keys in your hash are strings):
image_hash.size.downto(2) do |i|
  # Here we use slice to select everything *except* the first 18 bytes
  # Note: to select just the first 18 bytes we could do slice(0, 18)
  #       to select just the last 18 bytes we could do slice(-18, 18)
  # We then append this result to the next image down the line
  image_hash[(i - 1).to_s] += image_hash[i.to_s].slice(18, image_hash[i.to_s].size - 18)
end
If you want to remove the "tailed" bits permanently, you can use slice! to modify the string in place.
Maybe a bit cleaner:
# The keys in the question's hash are strings, so sort them numerically
keys = image_hash.keys.sort_by(&:to_i)
# Strip the 18-byte header from every partial except the first (which keeps its header)
keys.drop(1).each { |k| image_hash[k].slice!(0, 18) }
# Append them together
keys.collect { |k| image_hash[k] }.join
EDIT: Working code example https://gist.github.com/989563

How to search file text for a pattern and replace it with a given value

I'm looking for a script to search a file (or list of files) for a pattern and, if found, replace that pattern with a given value.
Thoughts?
Disclaimer: This approach is a naive illustration of Ruby's capabilities, and not a production-grade solution for replacing strings in files. It's prone to various failure scenarios, such as data loss in case of a crash, interrupt, or disk being full. This code is not fit for anything beyond a quick one-off script where all the data is backed up. For that reason, do NOT copy this code into your programs.
Here's a quick short way to do it.
file_names = ['foo.txt', 'bar.txt']

file_names.each do |file_name|
  text = File.read(file_name)
  new_contents = text.gsub(/search_regexp/, "replacement string")

  # To merely print the contents of the file, use:
  puts new_contents

  # To write changes to the file, use:
  File.open(file_name, "w") { |file| file.puts new_contents }
end
Actually, Ruby does have an in-place editing feature. Like Perl, you can say
ruby -pi.bak -e "gsub(/oldtext/, 'newtext')" *.txt
This will apply the code in double-quotes to all files in the current directory whose names end with ".txt". Backup copies of edited files will be created with a ".bak" extension ("foobar.txt.bak" I think).
NOTE: this does not appear to work for multiline searches. For those, you have to do it the other less pretty way, with a wrapper script around the regex.
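A minimal sketch of such a wrapper script (it slurps each file, so it is only suitable for files that fit comfortably in memory; the script name and argument order are my own choice):
# replace_multiline.rb
# usage: ruby replace_multiline.rb 'pattern' 'replacement' file1 [file2 ...]
pattern     = Regexp.new(ARGV.shift, Regexp::MULTILINE)   # MULTILINE lets . match newlines
replacement = ARGV.shift

ARGV.each do |path|
  text = File.read(path)
  File.write("#{path}.bak", text)                          # keep a backup, like -pi.bak
  File.write(path, text.gsub(pattern, replacement))
end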
Keep in mind that, when you do this, the filesystem could be out of space and you may create a zero-length file. This is catastrophic if you're doing something like writing out /etc/passwd files as part of system configuration management.
Note that in-place file editing like in the accepted answer will always truncate the file and write out the new file sequentially. There will always be a race condition where concurrent readers will see a truncated file. If the process is aborted for any reason (ctrl-c, OOM killer, system crash, power outage, etc) during the write then the truncated file will also be left over, which can be catastrophic. This is the kind of dataloss scenario which developers MUST consider because it will happen. For that reason, I think the accepted answer should most likely not be the accepted answer. At a bare minimum write to a tempfile and move/rename the file into place like the "simple" solution at the end of this answer.
You need to use an algorithm that:
Reads the old file and writes out to the new file. (You need to be careful about slurping entire files into memory).
Explicitly closes the new temporary file, which is where an exception may be thrown because the file buffers cannot be written to disk (there is no space). Catch this and clean up the temporary file if you like, but you need to rethrow something or fail fairly hard at this point.
Fixes the file permissions and modes on the new file.
Renames the new file and drops it into place.
With ext3 filesystems you are guaranteed that the metadata write that moves the file into place will not be reordered by the filesystem and written before the data buffers for the new file are written, so the edit should either fully succeed or fully fail. The ext4 filesystem has also been patched to support this kind of behavior. If you are very paranoid you should call the fdatasync() system call as a step 3.5, before moving the file into place.
Regardless of language, this is best practice. In languages where calling close() does not throw an exception (Perl or C) you must explicitly check the return of close() and throw an exception if it fails.
The suggestion above to simply slurp the file into memory, manipulate it and write it out to the file will be guaranteed to produce zero-length files on a full filesystem. You need to always use FileUtils.mv to move a fully-written temporary file into place.
A final consideration is the placement of the temporary file. If you open a file in /tmp then you have to consider a few problems:
If /tmp is mounted on a different filesystem, you may run /tmp out of space even though there is room for the file at its final destination.
Probably more importantly, when you try to mv the file across a device mount it will transparently be converted to cp behavior. The old file will be opened, the old file's inode will be preserved and reopened, and the file contents will be copied. This is most likely not what you want, and you may run into "text file busy" errors if you try to edit the contents of a file that is currently being executed. This also defeats the purpose of using the filesystem's mv, and you may run the destination filesystem out of space with only a partially written file.
This also has nothing to do with Ruby's implementation; the system mv and cp commands behave similarly.
What is more preferable is to open a Tempfile in the same directory as the old file. This ensures that there will be no cross-device move issues. The mv itself should never fail, and you should always get a complete and untruncated file. Any failures, such as device out of space, permission errors, etc., should be encountered during writing the Tempfile out.
The only downsides to the approach of creating the Tempfile in the destination directory are:
Sometimes you may not be able to open a Tempfile there, such as if you are trying to 'edit' a file in /proc for example. For that reason you might want to fall back and try /tmp if opening the file in the destination directory fails.
You must have enough space on the destination partition in order to hold both the complete old file and the new file. However, if you have insufficient space to hold both copies then you are probably short on disk space and the actual risk of writing a truncated file is much higher, so I would argue this is a very poor tradeoff outside of some exceedingly narrow (and well-monitored) edge cases.
Here's some code that implements the full algorithm (the Windows branch is untested and unfinished):
#!/usr/bin/env ruby

require 'tempfile'
require 'fileutils'

def file_edit(filename, regexp, replacement)
  tempdir = File.dirname(filename)
  tempprefix = File.basename(filename)
  tempprefix.prepend('.') unless RUBY_PLATFORM =~ /mswin|mingw|windows/
  tempfile =
    begin
      Tempfile.new(tempprefix, tempdir)
    rescue
      Tempfile.new(tempprefix)
    end
  File.open(filename).each do |line|
    tempfile.puts line.gsub(regexp, replacement)
  end
  tempfile.fdatasync unless RUBY_PLATFORM =~ /mswin|mingw|windows/
  tempfile.close
  unless RUBY_PLATFORM =~ /mswin|mingw|windows/
    stat = File.stat(filename)
    FileUtils.chown stat.uid, stat.gid, tempfile.path
    FileUtils.chmod stat.mode, tempfile.path
  else
    # FIXME: apply perms on windows
  end
  FileUtils.mv tempfile.path, filename
end

file_edit('/tmp/foo', /foo/, "baz")
And here is a slightly tighter version that doesn't worry about every possible edge case (for when you are on Unix and don't care about writing to /proc):
#!/usr/bin/env ruby

require 'tempfile'
require 'fileutils'

def file_edit(filename, regexp, replacement)
  Tempfile.open(".#{File.basename(filename)}", File.dirname(filename)) do |tempfile|
    File.open(filename).each do |line|
      tempfile.puts line.gsub(regexp, replacement)
    end
    tempfile.fdatasync
    tempfile.close
    stat = File.stat(filename)
    FileUtils.chown stat.uid, stat.gid, tempfile.path
    FileUtils.chmod stat.mode, tempfile.path
    FileUtils.mv tempfile.path, filename
  end
end

file_edit('/tmp/foo', /foo/, "baz")
The really simple use case, for when you don't care about filesystem permissions (either you're not running as root, or you're running as root and the file is root-owned):
#!/usr/bin/env ruby

require 'tempfile'
require 'fileutils'

def file_edit(filename, regexp, replacement)
  Tempfile.open(".#{File.basename(filename)}", File.dirname(filename)) do |tempfile|
    File.open(filename).each do |line|
      tempfile.puts line.gsub(regexp, replacement)
    end
    tempfile.close
    FileUtils.mv tempfile.path, filename
  end
end

file_edit('/tmp/foo', /foo/, "baz")
TL;DR: That should be used instead of the accepted answer at a minimum, in all cases, in order to ensure the update is atomic and concurrent readers will not see truncated files. As I mentioned above, creating the Tempfile in the same directory as the edited file is important here to avoid cross device mv operations being translated into cp operations if /tmp is mounted on a different device. Calling fdatasync is an added layer of paranoia, but it will incur a performance hit, so I omitted it from this example since it is not commonly practiced.
There isn't really a way to edit files in-place. What you usually do when you can get away with it (i.e. if the files are not too big) is, you read the file into memory (File.read), perform your substitutions on the read string (String#gsub) and then write the changed string back to the file (File.open, File#write).
If the files are big enough for that to be unfeasible, what you need to do is read the file in chunks (if the pattern you want to replace won't span multiple lines, then one chunk usually means one line; you can use File.foreach to read a file line by line), and for each chunk perform the substitution on it and append it to a temporary file. When you're done iterating over the source file, close it and use FileUtils.mv to overwrite it with the temporary file.
Another approach is to use inplace editing inside Ruby (not from the command line):
#!/usr/bin/ruby

def inplace_edit(file, bak, &block)
  old_stdout = $stdout
  argf = ARGF.clone
  argf.argv.replace [file]
  argf.inplace_mode = bak
  argf.each_line do |line|
    yield line
  end
  argf.close
  $stdout = old_stdout
end

inplace_edit 'test.txt', '.bak' do |line|
  line = line.gsub(/search1/, "replace1")
  line = line.gsub(/search2/, "replace2")
  print line unless line.match(/something/)
end
If you don't want to create a backup then change '.bak' to ''.
This works for me:
filename = "foo"
text = File.read(filename)
content = text.gsub(/search_regexp/, "replacestring")
File.open(filename, "w") { |file| file << content }
Here's a solution for find/replace in all files of a given directory. Basically I took the answer provided by sepp2k and expanded it.
# First set the files to search/replace in
files = Dir.glob("/PATH/*")

# Then set the variables for find/replace
original_string_or_regex = /REGEX/
replacement_string = "STRING"

files.each do |file_name|
  text = File.read(file_name)
  replace = text.gsub(original_string_or_regex, replacement_string)
  File.open(file_name, "w") { |file| file.puts replace }
end
require 'trollop'

opts = Trollop::options do
  opt :output, "Output file", :type => String
  opt :input, "Input file", :type => String
  opt :ss, "String to search", :type => String
  opt :rs, "String to replace", :type => String
end

# Trollop::options returns a plain hash, so index it rather than calling methods on it
text = File.read(opts[:input])
text.gsub!(opts[:ss], opts[:rs])
File.open(opts[:output], 'w') { |f| f.write(text) }
If you need to do substitutions across line boundaries, then using ruby -pi -e won't work because -p processes one line at a time. Instead, I recommend the following, although it could fail with a multi-GB file:
ruby -e "file='translation.ja.yml'; IO.write(file, (IO.read(file).gsub(/\s+'$/, %q('))))"
This looks for whitespace (potentially including newlines) followed by a quote, and removes the whitespace. The %q(') is just a fancy way of quoting the quote character.
Here is an alternative to the one-liner from jim, this time in a script:
ARGV[0..-3].each{|f| File.write(f, File.read(f).gsub(ARGV[-2],ARGV[-1]))}
Save it in a script, e.g. replace.rb
You run it on the command line with:
replace.rb *.txt <string_to_replace> <replacement>
*.txt can be replaced with another glob, with some filenames, or with paths
Broken down (but still executable) so I can explain what's happening:
# ARGV is an array of the arguments passed to the script.
ARGV[0..-3].each do |f|            # enumerate the arguments from the first up to the last minus two
  File.write(f,                    # open the argument (= filename) for writing
    File.read(f)                   # open the argument (= filename) for reading
      .gsub(ARGV[-2], ARGV[-1]))   # and replace all occurrences of the second-to-last argument with the last
end
EDIT: If you want to use a regular expression, use this instead:
ARGV[0..-3].each{|f| File.write(f, File.read(f).gsub(/#{ARGV[-2]}/,ARGV[-1]))}
Obviously, this is only for handling relatively small text files, no gigabyte monsters.
I am using the tty-file gem
Apart from replacing, it includes append, prepend (on a given text/regex inside the file), diff, and others.
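A minimal sketch based on the gem's README (the Gemfile pattern and replacement are just the README's example):
require 'tty-file'   # gem install tty-file

TTY::File.replace_in_file("Gemfile", /gem 'rails'/, "gem 'hanami'")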
