Ruby string mismatch while comparing text from a text file

I'm running into a problem while reading a text file with readline. I want to compare the first line of the file with a string and then move on to the next step, but the comparison never matches. Here is my code:
doc = File.open("example.txt", "r")
line1 = doc.readline
if line1 == "sukanta"
  line2 = doc.readline
  line3 = doc.readline
  line4 = doc.readline
end
My example.txt file contains:
sukanta
Software engineer
label2
server:107.108.9.190
Please give me a solution. Also, when I try to get the string length with line1.length, it doesn't show the number I expect.
I found the answer; it was a silly mistake: I should compare against "sukanta\n".
When I use readline to read each line, I have to process the lines sequentially in their original order and can't skip around. And when I use a loop like
doc = File.open("example.txt", "r")
doc.each_line do |lines|
  puts lines
end
I just get the whole text printed out; I can't separate the individual lines or access them out of order. How can I do that?

I suspect you are not taking into account that a line ends with $/ ("\n" on UNIX). So you probably intended
line1 == "sukanta\n"
or
line1.chomp == "sukanta"
and when you count the length yourself you are not including $/, so your count comes out one or two characters less than the actual length (depending on the OS).
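For example, a minimal sketch based on the example.txt from the question (chomp strips the trailing "\n" or "\r\n"):
doc = File.open("example.txt", "r")
line1 = doc.readline
if line1.chomp == "sukanta"
  line2 = doc.readline
  line3 = doc.readline
  line4 = doc.readline
end
doc.close
If you need the lines out of order rather than sequentially, read them all into an array instead:
lines = File.readlines("example.txt", chomp: true)
lines[3]  # => "server:107.108.9.190"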

Related

How to read the second line in a document.txt and then make a loop that reads line +1 in this document

My bot reads emails one by one from a document.txt file, and after logging in with each email it posts the comments that I keep in another file.
I have reached the point where the bot reads the emails, but I want each specific account to make a specific comment rather than a repeated one.
So the solution I have in mind is reading a specific line from the comments file.
For example, account 1 reads and posts line 1 of the comments file. I want to know how I can read the second line from the comments file.
This is the part of the code where I read comments one by one, but I want to read, for example, line two or three:
file = 'comments.txt'
File.readlines(file).each do |line|
  comment = ["#{line}"]
  comment.each { |val|
    comment = ["#{val}"]
  }
end
File.readlines returns an array, so you can do whatever you need with it:
lines = []
File.readlines(path_to_file, chomp: true).each.with_index(1) do |line, line_number|
  lines << (line_number == 2 ? 'Special line' : line)
end
Try the below.
# set the line number to read
line_number = 2 # <== Reading 2nd line
comment = IO.readlines('comments.txt')[line_number-1]
Your code is overwriting the comment variable in each iteration.
I'd write your code like this:
lines = File.readlines('comments.txt')
lines.each do |line|
  # entire line
end
In the loop you can do a lot of things with each single line. Unfortunately, I don't fully understand what you want to do (one comment vs. multiple, always the same comment for specific users, etc.), but I hope this helps anyway.
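If you only need one particular comment out of that array, you can also index it directly; a small sketch building on the lines array above (indices are zero-based):
second_comment = lines[1].chomp  # second line of comments.txt, newline stripped
puts second_comment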

Find strings and remove those lines

I'm trying to read all the text files from a directory, iterate through every file, search for strings in each file, and delete the lines containing them. E.g.,
sample.txt
#Wrote for the configuration ideas
mag = some , db\m09oi, id\polki
jio = red\po9i8
[\]
#mag = denk
#jio = tea
I want to delete the lines having mag.
Output
#Wrote for the configuration ideas
jio = red\po9i8
[\]
#jio = tea
I tried:
Dir.glob("D:\\my_folder\\*.txt") do |file_name|
  value = File.read(file_name)
  change = value.gsub!(/[#m]ag/, "")
  File.open(file_name, "w") { |file| file.puts change }
end
But the lines aren't removed.
Any suggestions please.
It is better to read the file line by line and if a line contains mag, just omit it, and only write other lines to the new file.
— Credits to #WiktorStribiżew
File.write(file_name, File.readlines(file_name, chomp: true).reject do |line|
  line[/\bmag\b/]
end.join($/))
Here we read the file, split it into lines with IO#readlines (chomping the trailing newlines), reject lines having mag as a single word inside ("magistrate" won't be matched), join it back with the line delimiter for this particular platform (e.g. \n on Unix), and write it back.
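For large files, the line-by-line variant suggested above avoids loading everything into memory at once. A rough sketch (the .tmp file name is an assumption, not part of the original answer):
require 'fileutils'

Dir.glob("D:/my_folder/*.txt") do |file_name|
  tmp_name = "#{file_name}.tmp"
  File.open(tmp_name, "w") do |out|
    File.foreach(file_name) do |line|
      out.write(line) unless line[/\bmag\b/]  # skip lines containing the word "mag"
    end
  end
  FileUtils.mv(tmp_name, file_name)           # replace the original file
end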

How to read a file's content and search for a string in multiple files

I have a text file, out.txt, that has around 100-plus entries like:
domain\1esrt
domain\2345p
yrtfj
tkpdp
....
....
I have to read out.txt line by line and check whether strings like "domain\1esrt" are present in any of the files under a different directory. If present, delete only that string occurrence and save the file.
I know how to read a file line by line, and I also know how to grep for a string in multiple files in a directory, but I'm not sure how to join those two to achieve the above requirement.
You can create an array with all the words or strings you want to find and then delete/replace:
strings_to_delete = ['aaa', 'domain\1esrt', 'delete_me']
Then read the file and use map to create an array with all the lines that don't match any of the elements in the array created before:
# read the file 'text.txt'
lines = File.open('text.txt', 'r').map do |line|
  # unless the line matches some value in the strings_to_delete array
  line unless strings_to_delete.any? do |word|
    word == line.strip
  end
# then remove the nil elements
end.reject(&:nil?)
And then open the file again, but this time to write to it all the lines which didn't match the values in the strings_to_delete array:
File.open('text.txt', 'w') do |line|
  lines.each do |element|
    line.write element
  end
end
The txt file looks like:
aaa
domain\1esrt
domain\2345p
yrtfj
tkpdp
....
....
delete_me
I don't know how it'll perform with a bigger file; anyway, I hope it helps.
I would suggest using gsub here. It will run a regex search on the string and replace it with the second parameter. So if you only have to replace any single string, I believe you can simply run gsub on that string (including the newline) and replace it with an empty string:
new_file_text = text.gsub(/regex_string\n/, "")
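To tie the two steps together (reading the entries from out.txt and cleaning every file in a directory), a rough sketch along the lines of the first answer could look like this; the directory path is a placeholder:
strings_to_delete = File.readlines('out.txt', chomp: true)

Dir.glob('path/to/other_dir/*') do |file_name|
  next unless File.file?(file_name)         # skip subdirectories
  kept = File.readlines(file_name).reject do |line|
    strings_to_delete.include?(line.chomp)  # drop lines listed in out.txt
  end
  File.write(file_name, kept.join)
end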

Ruby: How do you search for a substring, and increment a value within it?

I am trying to change a file by finding this string:
<aspect name=\"lineNumber\"><![CDATA[{CLONEINCR}]]>
and replacing {CLONEINCR} with an incrementing number. Here's what I have so far:
file = File.open('input3400.txt' , 'rb')
contents = file.read.lines.to_a
contents.each_index do |i|
  contents.join["<aspect name=\"lineNumber\"><![CDATA[{CLONEINCR}]]></aspect>"] = "<aspect name=\"lineNumber\"><![CDATA[#{i}]]></aspect>"
end
file.close
But this seems to go on forever - do I have an infinite loop somewhere?
Note: my text file is 533,952 lines long.
You are repeatedly concatenating all the elements of contents, making a substitution, and throwing away the result. This is happening once for each line, so no wonder it is taking a long time.
The easiest solution would be to read the entire file into a single string and use gsub on that to modify the contents. In your example you are inserting the (zero-based) file line numbers into the CDATA. I suspect this is a mistake.
This code replaces all occurrences of <![CDATA[{CLONEINCR}]]> with <![CDATA[1]]>, <![CDATA[2]]> etc. with the number incrementing for each matching CDATA found. The modified file is sent to STDOUT. Hopefully that is what you need.
File.open('input3400.txt' , 'r') do |f|
  i = 0
  contents = f.read.gsub('<![CDATA[{CLONEINCR}]]>') { |m|
    m.sub('{CLONEINCR}', (i += 1).to_s)
  }
  puts contents
end
If what you want is to replace CLONEINCR with the line number, which is what your above code looks like it's trying to do, then this will work. Otherwise see Borodin's answer.
output = File.readlines('input3400.txt').map.with_index do |line, i|
  line.gsub "<aspect name=\"lineNumber\"><![CDATA[{CLONEINCR}]]></aspect>",
            "<aspect name=\"lineNumber\"><![CDATA[#{i}]]></aspect>"
end
File.write('input3400.txt', output.join(''))
Also, you should be aware that when you read the lines into contents, you are creating a String distinct from the file. You can't operate on the file directly. Instead you have to create a new String that contains what you want and then overwrite the original file.

Using Ruby to find the first previous occurrence of a string

I'm creating some basic work assistance utilities using Ruby. I've hit a problem that I don't really need to solve, but curiosity has the best of me.
What I would like to be able to do is search the contents of a file, starting from a particular line and find the first PREVIOUS occurrence of a string.
For example, if I have the following text saved in a file, I would like to be able to search for "CREATE PROCEDURE" starting at line 4 and have this return/output "CREATE PROCEDURE sp_MERGE_TABLE"
CREATE PROCEDURE sp_MERGE_TABLE
AS
SOME HORRIBLE STATEMENT
HERE
CREATE PROCEDURE sp_SOMETHING_ELSE
AS
A DIFFERENT STATEMENT
HERE
Searching for content isn't a challenge, but specifying a starting line - no idea. And then searching backwards... well...
Any help at all appreciated!
TIA!
I think you have to read the file line by line; then the following will work:
flag = true
File.foreach("path/to/file") do |line|  # read the file line by line; path is a placeholder
  if flag && line.include?("CREATE PROCEDURE")
    puts line
    flag = false
  end
end
If performance isn't a big issue, you could just use a simple loop:
# a simple loop, made concrete; path, start_line and search_string are placeholders
last_seen = nil
line_no = 0
File.open(path) do |f|
  while line_no < start_line && (line = f.gets)
    last_seen = line_no if line.include?(search_string)  # or remember the file offset
    line_no += 1
  end
end
last_seen
I'm afraid you will have to work line by line through the file, unless you have some index over it pointing to the beginnings of the lines. That would make the loop a little simpler, but working through the file backwards is harder (unless you keep the whole file in memory).
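In case it's useful, here is a small sketch of that "index pointing to the beginnings of the lines" idea (the file name is a placeholder; this is an illustration, not part of the original answer):
offsets = []
File.open("procs.sql") do |f|
  until f.eof?
    offsets << f.pos   # byte offset where this line starts
    f.readline
  end
  # jump straight back to, say, line 3 (zero-based index 2) without re-reading the file
  f.seek(offsets[2])
  puts f.readline
end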
Edit:
I just had a much better idea, but I'm going to include the old solution anyway.
The benefit of searching backwards is that you only have to read the first chunk of the file, up to the specified line number. As the scan gets closer and closer to start_line, each new match simply replaces the old one. You still read in some redundant data at the beginning, but at least it's O(n).
path = "path/to/file"
start_line = 20
search_string = "findme!"
#assuming file is at least start_line lines long
match_index = nil
f = File.new(path)
start_line.times do |i|
  line = f.readline
  match_index = i if line.include? search_string
end
puts "Matched #{search_string} on line #{match_index}"
Of course, bear in mind that the size of this file plays an important role in answering your question.
If you wanted to get really serious, you could look into the IO class - it seems like this might be the ultimate solution. Untested, just a thought.
f = File.new(path)
start_line.downto(0) do |i|
  # caveat: IO#lineno= only resets the counter reported by IO#lineno;
  # it does not reposition the stream, so this would still need an explicit seek
  f.lineno = i
  break if f.gets.include?(search_string)
end
Original:
For an exhaustive solution, you could try something like the following. The downside is you'd need to read the whole file into memory, but it takes into account continuing from the bottom-up if it gets to the top without a match. Untested.
path = "path/to/file"
start_line = 20
search_string = "findme!"
#get lines of the file into an array (chomp optional)
lines = File.readlines(path).map(&:chomp)
#"cut" the deck, as with playing cards, so start_line is first in the array
lines = lines.slice!(start_line..lines.length) + lines
#searching backwards can just be searching a reversed array forwards
lines.reverse!
#search through the reversed-array, for the first occurence
reverse_occurence = nil
lines.each_with_index do |line, index|
  if line.include?(search_string)
    reverse_occurence = index
    break
  end
end
#reverse_occurence is now either "nil" for no match, or a reversed-index
#also un-cut the deck when mapping back to the original index (modulo handles the wrap-around)
if reverse_occurence
  occurence = (start_line - 1 - reverse_occurence) % lines.size
  line = lines[reverse_occurence]
  puts "Matched #{search_string} on line #{occurence}"
  puts line
end
1) Read the entire file into a string.
2) Reverse the file-data string.
3) Reverse the search string.
4) Search forward. Remember to match end-of-line instead of beginning-of-line, and to start from position end-minus-N rather than from N.
Not very fast or efficient, but it's elegant. Or at least clever.
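A rough sketch of that reverse-and-search idea, using the CREATE PROCEDURE example from the question (the file name and start_line are placeholders; this assumes a plain single-byte-encoded file):
data = File.read("procs.sql")         # whole file as one string
needle = "CREATE PROCEDURE".reverse   # reversed search string
reversed = data.reverse               # reversed file data

# start the forward search at the position corresponding to the end of
# line start_line in the original file ("end minus N")
start_line = 4
prefix_length = data.lines.first(start_line).join.length
from = data.length - prefix_length

if (idx = reversed.index(needle, from))
  original_pos = data.length - idx - needle.length
  puts data[original_pos..-1].lines.first  # => "CREATE PROCEDURE sp_MERGE_TABLE"
end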
