Ruby: read and write/change the same file

I am trying to change the contents of an existing file. I have this piece of code, which works, but I would like to find a better way to do the manipulation while opening the file only once.
content = nil
File.open(file_name, 'r') do |f|
  content = f.read
end
File.open(file_name, 'w') do |f|
  content.insert(0, "something ")
  f.write(content)
end
Is there a way to do this while opening the file only once?
I have tried File.open(file_name, 'r+'), but that only seems to let me append to the end of the file, not insert something at the beginning.

[Edit: I misunderstood your question, but my code below can be fixed by simply inserting the line:
text_to_prepend = ''
after
line_out = text_to_prepend + buf.shift
It could be simplified a little (for your question), but I'll leave it as is to show how the same string could be prepended to each line.]
You can open the file but once, and not read the entire file before writing, but it's messy and a bit tricky. Basically, you need to move the file pointer between reading and writing and maintain a buffer that contains lines from the file that will be wholly or partially overwritten when each modified line is written.
At each step, remove the first line from the buffer and modify it in preparation for writing. Before writing, however, you may need to read one or more additional lines into the buffer, in order that the read pointer remains ahead of the write pointer after the modified line is written. After all lines have been read, each remaining line in the buffer is modified and written.
Code
def prepend_file_lines(file_name, text_to_prepend)
  File.open(file_name, 'r+') do |f|
    next if f.eof?                        # empty file: nothing to do
    write_pos = 0
    line_in = f.readline
    read_pos = line_in.bytesize
    buf = [line_in]                       # lines read but not yet rewritten
    last_line_read = f.eof?
    until buf.empty?
      line_out = text_to_prepend + buf.shift
      # Keep the read pointer ahead of where the next write will end.
      while !last_line_read && read_pos <= write_pos + line_out.bytesize
        line_in = f.readline
        buf << line_in
        read_pos += line_in.bytesize
        last_line_read = f.eof?
      end
      f.seek(write_pos, IO::SEEK_SET)
      write_pos += f.write(line_out)
      f.seek(read_pos, IO::SEEK_SET)
    end
  end
end
Example
First, create a test file.
text =<<_
Now is
the time
for all Rubiests
to raise their
glasses to Matz.
_
F_NAME = "sample.txt"
File.write(F_NAME, text)
We can confirm the file was written correctly:
File.readlines(F_NAME).each { |l| puts l }
# Now is
# the time
# for all Rubiests
# to raise their
# glasses to Matz.
Now let's try it:
prepend_file_lines("sample.txt", "Here's to Matz: ")
File.readlines(F_NAME).each { |l| puts l }
# Here's to Matz: Now is
# Here's to Matz: the time
# Here's to Matz: for all Rubiests
# Here's to Matz: to raise their
# Here's to Matz: glasses to Matz.
Note that when testing, it's necessary to write the test file before each call to prepend_file_lines, since the file is being modified.
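For example, to run the test a second time (reusing the text heredoc and F_NAME from above):
File.write(F_NAME, text)                        # restore the original file contents
prepend_file_lines(F_NAME, "Here's to Matz: ")  # then call the method again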

It looks like you want IO::SEEK_SET with 0 to rewind the file pointer after reading.
file_name = "File.txt";
File.open(file_name , 'r+') do |f|
content = f.read
content.insert(0, "somehting else")
f.seek(0, IO::SEEK_SET)
f.write(content)
end

You can do it in the same file, but you'll likely overwrite part of its contents.
Each file operation moves the file's cursor to a new position, and that position is where later operations take place. So if you read 8 bytes, you have to move the cursor back 8 bytes and write exactly 8 bytes to avoid overwriting anything else; if you write fewer bytes, the remaining bytes stay unchanged.
The Ruby File class is a subclass of IO, which is documented at http://www.ruby-doc.org/core-1.9.3/IO.html.
To open a file for read/write operations, use the "r+" mode.
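A minimal sketch of that cursor arithmetic (the file name and the 8-byte chunk size are only illustrative):
File.open("example.txt", "r+") do |f|
  chunk = f.read(8)                          # cursor is now 8 bytes in (or less, near EOF)
  unless chunk.nil?
    f.seek(-chunk.bytesize, IO::SEEK_CUR)    # move back exactly as far as we read
    f.write("x" * chunk.bytesize)            # write exactly that many bytes back
  end
end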

Related

How to delete several repeating triplets from a big text using Ruby?

class RNAtoAA
  def self.rna_convert(rna)
    rna.slice! "AUG"
  end
end
I tried this to delete "AUG" (I also need to delete 2 more repeating patterns), but it did not produce the desired result. I also tried .gsub("AUG", "UAA").
string = 'AAAAUG'
RNAtoAA.rna_convert(string)
puts string
# AAA
Seems to work as expected.
If you want to return the updated string, use this:
class RNAtoAA
  def self.rna_convert(rna)
    rna.slice! "AUG"
    return rna
  end
end
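Since you also need to delete two more repeating patterns, here is a small sketch that strips several codons in one pass with gsub; the extra codons (UAA, UAG) are taken from the pattern list in the chunked answer below and are only illustrative:
class RNAtoAA
  STOP_CODONS = /AUG|UAA|UAG/     # illustrative pattern list
  # Remove every occurrence of the listed codons from the string in place.
  def self.rna_strip_all(rna)
    rna.gsub!(STOP_CODONS, "")    # gsub! edits in place; use gsub for a copy
    rna
  end
end

puts RNAtoAA.rna_strip_all("AAAAUGCCCUAA".dup)  # => AAACCC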
You need to read the file in chunks so that memory isn't swamped.
This example creates a test file and uses it as the source for a filtered version.
If you were to create a gigabyte-sized test file, you would need to do the same for its creation.
If your file contains linefeeds, this could be simpler with lazy loading of the lines, but I'm going to assume it doesn't.
# create a testfile
patterns = ["AUG", "UAA", "UAG", "UGA", "AAA", "BBB", "CCC"]
large_string = ""
1_000_000.times { large_string << patterns.sample }
File.write("rna.dat", large_string)

# read the file, remove some patterns and write to a new file
filtered = ["AUG", "UAA", "UAG", "UGA"]
File.open("filtered.dat", "w") do |out_file|
  File.open("rna.dat", "r") do |in_file|
    while chunk = in_file.read(3)
      # Read small chunks of 3 bytes to limit memory usage
      out_file.write chunk unless filtered.include? chunk
    end
  end
end

How do I detect end of file in Ruby?

I wrote the following script to read a CSV file:
f = File.open("aFile.csv")
text = f.read
text.each_line do |line|
if (f.eof?)
puts "End of file reached"
else
line_num +=1
if(line_num < 6) then
puts "____SKIPPED LINE____"
next
end
end
arr = line.split(",")
puts "line number = #{line_num}"
end
This code runs fine if I take out the lines:
if (f.eof?)
  puts "End of file reached"
With these lines in, I get an exception.
I was wondering how I can detect the end of file in the code above.
Try this short example:
f = File.open(__FILE__)
text = f.read
p f.eof? # -> true
p text.class #-> String
With f.read you read the whole file into text and reach EOF.
(Remark: __FILE__ is the script file itself. You may use your CSV file instead.)
In your code you use text.each_line. This executes each_line for the string text. It has no effect on f.
You could use File#each_line without using a variable text. The test for EOF is not necessary. each_line loops on each line and detects EOF on its own.
f = File.open(__FILE__)
line_num = 0
f.each_line do |line|
  line_num += 1
  if (line_num < 6)
    puts "____SKIPPED LINE____"
    next
  end
  arr = line.split(",")
  puts "line number = #{line_num}"
end
f.close
You should close the file after reading it. Using a block for this is more Ruby-like:
line_num = 0
File.open(__FILE__) do |f|
  f.each_line do |line|
    line_num += 1
    if (line_num < 6)
      puts "____SKIPPED LINE____"
      next
    end
    arr = line.split(",")
    puts "line number = #{line_num}"
  end
end
One general remark: There is a CSV library in Ruby. Normally it is better to use that.
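As a quick sketch of that suggestion (assuming the same aFile.csv and the same skip-the-first-five-lines logic as above), the standard CSV library handles the line iteration and field splitting for you:
require 'csv'

line_num = 0
CSV.foreach("aFile.csv") do |row|   # row is already split into fields
  line_num += 1
  if line_num < 6
    puts "____SKIPPED LINE____"
    next
  end
  puts "line number = #{line_num}"
end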
https://www.ruby-forum.com/topic/218093#946117 talks about this.
content = File.read("file.txt")
content = File.readlines("file.txt")
The above 'slurps' the entire file into memory.
File.foreach("file.txt") {|line| content << line}
You can also use IO#each_line. These last two options do not read the entire file into memory. The use of the block makes this automatically close your IO object as well. There are other ways as well, IO and File classes are pretty feature rich!
I refer to IO objects, as File is a subclass of IO. I tend to use IO when I don't really need the added methods from File class for the object.
This way you don't need to deal with EOF; Ruby will do it for you.
Sometimes the best handling is simply not to handle it, when you really don't need to.
Of course, Ruby has a method for this.
Without testing this, it seems you should perform a rescue rather than checking.
http://www.ruby-doc.org/core-2.0/EOFError.html
file = File.open("aFile.csv")
begin
loop do
some_line = file.readline
# some stuff
end
rescue EOFError
# You've reached the end. Handle it.
end

In ruby, file.readlines.each not faster than file.open.each_line, why?

I'm just analyzing my IIS logs (bonus: I happened to find out that IIS logs are encoded in ASCII, errrr...).
Here's my Ruby code:
1. readlines
Dir.glob("*.log").each do |filename|
  File.readlines(filename, :encoding => "ASCII").each do |line|
    # comment line
    if line[0] == '#'
      next
    else
      line_content = line.downcase
      # just care about first one
      matched_keyword = keywords.select { |e| line_content.include? e }[0]
      total_count += 1 if extensions.any? { |e| line_content.include? e }
      hit_count[matched_keyword] += 1 unless matched_keyword.nil?
    end
  end
end
2. open
Dir.glob("*.log").each do |filename|
  File.open(filename, :encoding => "ASCII").each_line do |line|
    # comment line
    if line[0] == '#'
      next
    else
      line_content = line.downcase
      # just care about first one
      matched_keyword = keywords.select { |e| line_content.include? e }[0]
      total_count += 1 if extensions.any? { |e| line_content.include? e }
      hit_count[matched_keyword] += 1 unless matched_keyword.nil?
    end
  end
end
"readlines" read the whole file in mem, why "open" always a bit faster on the contrary??
I tested it a couple of times on Win7 Ruby1.9.3
Both readlines and open.each_line read the file only once. Ruby also does buffering on IO objects, reading a block (e.g. 64 KB) of data from disk at a time to minimize the cost of disk reads, so there should be little difference in the time spent on the disk-read step.
When you call readlines, Ruby constructs an empty array [], then repeatedly reads a line of file content and pushes it onto the array. At the end it returns the array containing all lines of the file.
When you call each_line, Ruby reads a line of file content and yields it to your logic. When you have finished processing this line, Ruby reads another line. It keeps reading lines until there is no more content in the file.
The difference between the two methods is that readlines has to append the lines to an array. When the file is large, Ruby may have to duplicate the underlying array (at the C level) one or more times to enlarge it.
Digging into the source, readlines is implemented by io_s_readlines, which calls rb_io_readlines. rb_io_readlines calls rb_io_getline_1 to fetch a line and rb_ary_push to push the result onto the returned array.
each_line is implemented by rb_io_each_line, which calls rb_io_getline_1 to fetch a line, just like readlines, and yields the line to your logic with rb_yield.
So for each_line there is no need to store line results in a growing array: no array resizing or copying issues.
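If you want to reproduce the gap without the IIS-specific logic, a minimal sketch along these lines (the file name and the simple line count are assumptions, not from the question) isolates the two read styles:
require 'benchmark'

log_file = "big.log"   # assumed: any large text file you have on hand

Benchmark.bm(10) do |x|
  x.report("readlines") do
    count = 0
    File.readlines(log_file).each { |line| count += 1 unless line.start_with?("#") }
  end
  x.report("each_line") do
    count = 0
    File.open(log_file) { |f| f.each_line { |line| count += 1 unless line.start_with?("#") } }
  end
end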

Read, edit, and write a text file line-wise using Ruby

Is there a good way to read, edit, and write files in place in Ruby?
In my online search I've found stuff suggesting to read it all into an array, modify said array, then write everything out. I feel like there should be a better solution, especially if I'm dealing with a very big file.
Something like:
myfile = File.open("path/to/file.txt", "r+")
myfile.each do |line|
  myfile.replace_puts('blah') if line =~ /myregex/
end
myfile.close
Where replace_puts would write over the current line, rather than (over)writing the next line as it currently does because the pointer is at the end of the line (after the separator).
So then every line that matches /myregex/ will be replaced with 'blah'. Obviously what I have in mind is a bit more involved than that, as far as processing, and would be done in one line, but the idea is the same - I want to read a file line by line, and edit certain lines, and write out when I'm done.
Maybe there's a way to just say "rewind back to just after the last separator"? Or some way of using each_with_index and write via a line index number? I couldn't find anything of the sort, though.
The best solution I have so far is to read the file line-wise, write the lines out to a new (temp) file line-wise (possibly edited), then overwrite the old file with the new temp file and delete it. Again, I feel like there should be a better way; I don't think I should have to create a new 1 GB file just to edit some lines in an existing 1 GB file.
In general, there's no way to make arbitrary edits in the middle of a file. It's not a deficiency of Ruby. It's a limitation of the file system: Most file systems make it easy and efficient to grow or shrink the file at the end, but not at the beginning or in the middle. So you won't be able to rewrite a line in place unless its size stays the same.
There are two general models for modifying a bunch of lines. If the file is not too large, just read it all into memory, modify it, and write it back out. For example, adding "Kilroy was here" to the beginning of every line of a file:
path = '/tmp/foo'
lines = IO.readlines(path).map do |line|
  'Kilroy was here ' + line
end
File.open(path, 'w') do |file|
  file.puts lines
end
Although simple, this technique has a danger: If the program is interrupted while writing the file, you'll lose part or all of it. It also needs to use memory to hold the entire file. If either of these is a concern, then you may prefer the next technique.
You can, as you note, write to a temporary file. When done, rename the temporary file so that it replaces the input file:
require 'tempfile'
require 'fileutils'

path = '/tmp/foo'
temp_file = Tempfile.new('foo')
begin
  File.open(path, 'r') do |file|
    file.each_line do |line|
      temp_file.puts 'Kilroy was here ' + line
    end
  end
  temp_file.close
  FileUtils.mv(temp_file.path, path)
ensure
  temp_file.close
  temp_file.unlink
end
Since the rename (FileUtils.mv) is atomic, the rewritten input file will pop into existence all at once. If the program is interrupted, either the file will have been rewritten, or it will not. There's no possibility of it being partially rewritten.
The ensure clause is not strictly necessary: The file will be deleted when the Tempfile instance is garbage collected. However, that could take a while. The ensure block makes sure that the tempfile gets cleaned up right away, without having to wait for it to be garbage collected.
If you want to overwrite a file line by line, you'll have to ensure the new line has the same length as the original line. If the new line is longer, part of it will be written over the next line. If the new line is shorter, the remainder of the old line just stays where it is.
The tempfile solution is really much safer. But if you're willing to take a risk:
File.open('test.txt', 'r+') do |f|
  old_pos = 0
  f.each do |line|
    f.pos = old_pos # this is the 'rewind'
    f.print line.gsub('2010', '2011')
    old_pos = f.pos
  end
end
If the line size does change, this is a possibility:
File.open('test.txt', 'r+') do |f|
  out = ""
  f.each do |line|
    out << line.gsub(/myregex/, 'blah')
  end
  f.pos = 0
  f.print out
  f.truncate(f.pos)
end
Just in case you are using Rails or Facets, or you otherwise depend on Rails' ActiveSupport, you can use the atomic_write extension to File:
File.atomic_write('path/file') do |file|
  file.write('your content')
end
Behind the scenes, this will create a temporary file which it will later move to the desired path, taking care of closing the file for you.
It further clones the file permissions of the existing file or, if there isn't one, of the current directory.
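Applied to the question's task, a sketch might look like this (assuming ActiveSupport is available and reusing the question's /myregex/ placeholder); atomic_write writes to a temp file while the original is still readable, then swaps it in:
require 'active_support/core_ext/file/atomic'   # provides File.atomic_write

File.atomic_write('path/to/file.txt') do |file|
  File.foreach('path/to/file.txt') do |line|
    file.write(line =~ /myregex/ ? "blah\n" : line)
  end
end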
You can write in the middle of a file, but you have to be careful to keep the length of the string you write the same as the length of the text you overwrite; otherwise you overwrite some of the following text. I give an example here using File#seek. IO::SEEK_CUR gives the current position of the file pointer, which is at the end of the line that was just read; the +1 is for the CR character at the end of the line (on Windows each line ends in CR LF, but only the LF is counted in line.length).
look_for = "bbb"
replace_with = "xxxxx"
File.open(DATA, 'r+') do |file|
file.each_line do |line|
if (line[look_for])
file.seek(-(line.length + 1), IO::SEEK_CUR)
file.write line.gsub(look_for, replace_with)
end
end
end
__END__
aaabbb
bbbcccddd
dddeee
eee
After execution, the data at the end of the script looks like the following, which is probably not what you had in mind.
aaaxxxxx
bcccddd
dddeee
eee
Taking that into consideration, the speed of this technique is much better than the classic 'read and write to a new file' method.
See these benchmarks on a 1.7 GB file of music data.
For the classic approach I used Wayne's technique.
The benchmark is done with the .bmbm method so that caching of the file doesn't play a big role. The tests were done with MRI Ruby 2.3.0 on Windows 7.
The strings were effectively replaced; I checked both methods.
require 'benchmark'
require 'tempfile'
require 'fileutils'

look_for = "Melissa Etheridge"
replace_with = "Malissa Etheridge"
very_big_file = 'D:\Documents\muziekinfo\all.txt'.gsub('\\', '/')

def replace_with file_path, look_for, replace_with
  File.open(file_path, 'r+') do |file|
    file.each_line do |line|
      if (line[look_for])
        file.seek(-(line.length + 1), IO::SEEK_CUR)
        file.write line.gsub(look_for, replace_with)
      end
    end
  end
end

def replace_with_classic path, look_for, replace_with
  temp_file = Tempfile.new('foo')
  File.foreach(path) do |line|
    if (line[look_for])
      temp_file.write line.gsub(look_for, replace_with)
    else
      temp_file.write line
    end
  end
  temp_file.close
  FileUtils.mv(temp_file.path, path)
ensure
  temp_file.close
  temp_file.unlink
end

Benchmark.bmbm do |x|
  x.report("adapt          ") { 1.times { replace_with very_big_file, look_for, replace_with } }
  x.report("restore        ") { 1.times { replace_with very_big_file, replace_with, look_for } }
  x.report("classic adapt  ") { 1.times { replace_with_classic very_big_file, look_for, replace_with } }
  x.report("classic restore") { 1.times { replace_with_classic very_big_file, replace_with, look_for } }
end
Which gave
Rehearsal ---------------------------------------------------
adapt 6.989000 0.811000 7.800000 ( 7.800598)
restore 7.192000 0.562000 7.754000 ( 7.774481)
classic adapt 14.320000 9.438000 23.758000 ( 32.507433)
classic restore 14.259000 9.469000 23.728000 ( 34.128093)
----------------------------------------- total: 63.040000sec
user system total real
adapt 7.114000 0.718000 7.832000 ( 8.639864)
restore 6.942000 0.858000 7.800000 ( 8.117839)
classic adapt 14.430000 9.485000 23.915000 ( 32.195298)
classic restore 14.695000 9.360000 24.055000 ( 33.709054)
So the in-file replacement was about 4 times faster.

Ruby: Deleting last iterated item?

What I'm doing is this: I have one file as input and another as output. I choose a random line from the input, put it in the output, and then delete it from the input.
Now, I've iterated over the file and am on the line I want. I've copied it to the output file. Is there a way to delete it? I'm doing something like this:
for i in 0..number_of_lines_to_remove
  line = rand(lines_in_file - 2) + 1 # not removing the first line
  counter = 0
  IO.foreach("input.csv", "r") { |current_line|
    if counter == line
      File.open("output.csv", "a") { |output|
        output.write(current_line)
      }
    end
    counter += 1
  }
end
So, I have current_line, but I'm not sure how to remove it from the source file.
Array.delete_at might do. Given an index, it removes the object at that index, returning the object.
input.csv:
one,1
two,2
three,3
Program:
#!/usr/bin/ruby1.8
lines = File.readlines('/tmp/input.csv')
File.open('/tmp/output.csv', 'a') do |file|
  file.write(lines.delete_at(rand(lines.size)))
end
p lines # ["two,2\n", "three,3\n"]
output.csv:
one,1
Here is a Randomline class. You create a new Randomline object by passing it an input file name and an output file name. You can then call the deleterandom method on the object, passing it the number of lines to delete.
The data is stored internally in arrays as well as being written to file. Output is currently in append mode, so if you use the same output file it will just add to the end; you could change the "a" to a "w" if you wanted to start the file fresh each time.
class Randomline
  attr_accessor :inputarray, :outputarray

  def initialize(filein, fileout)
    @filename = filein
    @filein = File.open(filein, "r+")
    @fileoutput = File.open(fileout, "a")
    @inputarray = []
    @outputarray = []
    readin()
  end

  def readin()
    @filein.each do |line|
      @inputarray << line
    end
  end

  def deleterandom(numtodelete)
    numtodelete.times do |num|
      random = rand(@inputarray.size)
      @outputarray << @inputarray[random]
      @fileoutput.puts @inputarray[random]
      @inputarray.delete_at(random)
    end
    @filein = File.open(@filename, "w")
    @inputarray.each do |line|
      @filein.puts line
    end
  end
end
Here is an example of it being used:
a = Randomline.new("testin.csv","testout.csv")
a.deleterandom(3)
You have to re-write the source file after removing a line; otherwise the modifications won't stick, as they're performed on an in-memory copy of the data.
Keep in mind that any operation which modifies a file in place runs the risk of truncating the file if there's an error of any sort and the operation cannot complete.
It would be safer to use some kind of simple database for this kind of thing, as libraries like SQLite and BDB have methods for ensuring data integrity, but if that's not an option, you just need to be careful when writing the new input file.
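A minimal sketch of that careful rewrite, assuming the question's input.csv and output.csv names and a non-empty input file, would route the surviving lines through a temp file and rename it into place only once the write has succeeded:
require 'tempfile'
require 'fileutils'

lines = File.readlines("input.csv")
removed = lines.delete_at(rand(lines.size))           # pick and remove one random line
File.open("output.csv", "a") { |out| out.write(removed) }

# Rewrite the input via a temp file so an interrupted write can't truncate the original.
temp = Tempfile.new("input")
begin
  temp.write(lines.join)
  temp.close
  FileUtils.mv(temp.path, "input.csv")
ensure
  temp.close
  temp.unlink
end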
