I'm trying to read every file in a specified directory. I'd like to ignore hidden files. I've found a way to do this, but I'm pretty sure it is the most inefficient way to do this.
This is what I've tried,
Find.find(directory) do |path|
  file_paths << path if path =~ /.*\./ and !path.split("/")[-1].to_s.starts_with?(".")
end
This works. But I hate it.
I then tried to do this,
file_paths << path if path =~ /.*\./ and path =~ /^\./
But this returned nothing for me. What am I doing wrong here?
You could just use Dir
file_paths = Dir.glob("#{directory}/*")
Dir.glob docs:
Returns the filenames found by expanding pattern which is an Array of the patterns or the pattern String, either as an array or as parameters to the block.
Note, this will not match Unix-like hidden files (dotfiles). In order to include those in the match results, you must use something like "{*,.*}".
Per @arco444, if you want this to search recursively:
file_paths = Dir.glob("#{directory}/**/*")
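If you ever do want the dotfiles included instead, the docs' hint translates to a brace pattern or the File::FNM_DOTMATCH flag. A quick sketch (untested, reusing directory from above):
# Either of these includes dotfiles; note FNM_DOTMATCH also returns "." and ".." entries
hidden_too = Dir.glob("#{directory}/**/{*,.*}")
hidden_too = Dir.glob("#{directory}/**/*", File::FNM_DOTMATCH)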
If you wanted to ignore files starting with ., the below would append those that don't to the file_paths array
Find.find(directory) do |path|
  if File.file?(path)
    file_paths << path unless File.basename(path).start_with?(".")
  end
end
Note that this will not necessarily ignore hidden files, for the reasons mentioned in the comments. It also currently includes "hidden" directories, i.e. a file such as /some/.hidden/directory/normal.file would be included in the list.
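If you also want to skip hidden directories entirely, Find.prune can stop Find from descending into them. A rough sketch along the same lines (untested, reusing directory and file_paths from above):
require 'find'

Find.find(directory) do |path|
  if File.basename(path).start_with?(".") && path != directory
    Find.prune # skip hidden files and don't descend into hidden directories
  elsif File.file?(path)
    file_paths << path
  end
end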
Related
I intend to pass a glob string matching all directories, such as /**/*, to a Dir constructor, so that I receive an array of Dirs representing the matched directories.
How can I get the paths of those Dirs as strings? Is it possible to do that without invoking Dir.chdir and without iterating over all the files contained in those directories?
EDIT: After reading the first answer, I plan on testing this snippet, just to print the value returned by the glob() method:
def processRemappingConfig(configString)
  configLineArray = configString.split("=>").each { |entry| entry.chomp! }
  if configLineArray[0].match(/(\*\*)+/)
    # TODO: expand dirname path and get list of paths
    puts Dir.glob(configLineArray[0])
  end
end
Where configString will be /**/$currLogicSrcProjDirName=>/$currLogicSrcProjDirName
If you add a trailing slash to the glob pattern, you'll get back only directories, rather than directories and files:
directories = Dir.glob("/**/*/")
That will get you a simple array of strings with all the directory names.
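Note that the entries typically come back with a trailing slash; if you'd rather have bare paths, a quick cleanup pass works:
directories = Dir.glob("/**/*/").map { |d| d.chomp("/") }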
Wider context: Case-insensitive filename on case sensitive file system
Given the path of a directory (as a string, which might be relative to the current working dir or absolute), I'd like to open a specific file. I know the file's filename except for its case. (It could be TASKDATA.XML, TaskData.xml or even tAsKdAtA.xMl.)
Inspired by the accepted answer to Open a file case-insensitively in Ruby under Linux, I've come up with this little module to produce a glob for matching the file's name:
module Utils
  def self.case_insensitive_glob_string(string)
    string.each_char.map do |c|
      cased = c.upcase != c.downcase
      cased ? "[#{c.upcase}#{c.downcase}]" : c
    end.join
  end
end
For my specific case, I'd call this with
Utils.case_insensitive_glob_string('taskdata.xml')
and would get
'[Tt][Aa][Ss][Kk][Dd][Aa][Tt][Aa].[Xx][Mm][Ll]'
Specific context: glob relative to a dir ≠ pwd
Now I have to expand the glob, i.e. match it against actual files in the given directory. Unfortunately, Dir.glob(...) doesn't seem to have an argument for passing a directory('s path) relative to which the glob should be expanded. Intuitively, it would make sense to me to create a Dir object and have that handle the glob:
d = Dir.new(directory_path)
# => #<Dir:/the/directory>
filename = d.glob(Utils.case_insensitive_glob_string('taskdata.xml')).first() # I wish ...
# NoMethodError: undefined method `glob' for #<Dir:/the/directory>
... but glob only exists as a class method, not as an instance method. (Does anybody know why that's true of so many of Dir's methods that would make perfect sense relative to a specific directory?)
So it looks like I have two options:
Change the current working dir to the given directory
or
expand the filename's glob in combination with the directory path
The first option is easy: use Dir.chdir. But because this is in a Gem, and I don't want to mess with the environment of the users of my Gem, I shy away from it. (It's probably somewhat better with the block form than with manually resetting the working dir, or forgetting to, when I'm done.)
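For reference, the block form I'd be shying away from looks roughly like this (a sketch using the names from above; the block restores the previous working directory when it returns):
filename = Dir.chdir(directory_path) do
  Dir.glob(Utils.case_insensitive_glob_string('taskdata.xml')).first
end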
The second option looks easy. Simply do
taskdata_xml_name_glob = Utils.case_insensitive_glob_string('taskdata.xml')
taskdata_xml_path_glob = File.join(directory_path, taskdata_xml_name_glob)
filename = Dir.glob(taskdata_xml_path_glob).first()
and that's it, right? Almost. When directory_path contains characters that have a special meaning in globs, they will be wrongly expanded, when all I want is glob expansion on the filename. This is unlikely, but since the path is provided by the Gem user, I have to account for it anyway.
Question
Should I escape directory_path before File.joining it with the filename glob? If so, is there a facility to do that or would I have to code the escaping function myself?
Or should I use a different approach (be it chdir, or something yet different)?
If I were implementing that behaviour, I would go with filtering the array returned by Dir.entries:
Dir.entries("#{target}").select { |f| f =~ /\A#{filename}\z/i }
Please be aware that on Unix platforms both the . and .. entries will be listed as well, but they are unlikely to be matched in the second step. Also, the filename should probably be escaped with Regexp.escape:
Dir.entries("#{target}").select { |f| f =~ /\A#{Regexp.escape(filename)}\z/i }
I want to remove the following characters from several files in a folder. What I have so far is this:
str.delete! '!##$%^&*()'
which I think will work to remove the characters. What do I need to do to make it run through all the files in the folder?
You clarified your question, stating you want to remove certain characters from the contents of files in a directory. I created a straightforward way to traverse a directory (and optionally, subdirectories) and remove specified characters from the file contents. I used String#delete like you started with. If you want to remove more advanced patterns, you might want to change it to String#gsub with regular expressions.
The example below will traverse a tmp directory (and all subdirectories) relative to the current working directory and remove all occurrences of !, $, and # inside the files found. You can of course also pass an absolute path, e.g., C:/some/dir. Notice I do not filter on files; I assume it's all text files in there. You can of course add a file extension check if you wish (a sketch of such a variant follows the example).
def replace_in_files(dir, chars, subdirs = true)
  Dir[dir + '/*'].each do |file|
    if File.directory?(file) # Traverse inner directories if subdirs == true
      replace_in_files(file, chars, subdirs) if subdirs
    else # Replace file contents
      replaced = File.read(file).delete(chars)
      File.write(file, replaced)
    end
  end
end
replace_in_files('tmp', '!$#')
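If you do want the extension check mentioned above, a variant like the following (a sketch; the name replace_in_text_files is mine) only touches .txt files:
# Sketch: same traversal, but only .txt files are modified
def replace_in_text_files(dir, chars, subdirs = true)
  Dir[dir + '/*'].each do |file|
    if File.directory?(file)
      replace_in_text_files(file, chars, subdirs) if subdirs
    elsif File.extname(file).downcase == '.txt'
      File.write(file, File.read(file).delete(chars))
    end
  end
end

replace_in_text_files('tmp', '!$#')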
I think this might work, although I'm a little shaky on the Dir class in Ruby.
Dir.foreach('/path/to/dir') do |file|
  file.delete '!##$%^&*()'
end
There's a more general version of your question here: Iterate through every file in one directory
Hopefully a more thorough answer will be forthcoming, but maybe this'll get you where you need to go.
Dir.foreach('filepath') do |f|
  next if Dir.exist?("filepath/#{f}") # skip directories (including . and ..)
  file = File.new("filepath/#{f}", 'r+')
  text = file.read.delete("'!##$%^&*()")
  file.rewind
  file.write(text)
  file.truncate(file.pos) # drop leftover bytes, since the new contents are shorter
  file.close
end
The reason you can't do
file.write(file.read.delete("'!##$%^&*()"))
is that file.read leaves the "cursor" at the end of the text. Instead of writing over the file, you would be appending to the file, which isn't what you want.
You could also add a method to the File class that would move the cursor to the beginning of the file.
class File
  def newRead
    data = self.read
    self.rewind
    data
  end
end

Dir.foreach('filepath') do |f|
  next if Dir.exist?("filepath/#{f}")
  file = File.new("filepath/#{f}", 'r+')
  file.write(file.newRead.delete("'!##$%^&*()"))
  file.truncate(file.pos) # avoid leaving stale bytes at the end of the file
  file.close
end
I'm trying to crawl FTP and pull down all the files recursively.
Up until now I was trying to pull down a directory with
ftp.list.each do |entry|
  if entry.split(/\s+/)[0][0, 1] == "d"
    out[:dirs] << entry.split.last unless black_dirs.include? entry.split.last
  else
    out[:files] << entry.split.last unless black_files.include? entry.split.last
  end
end
But it turns out that if you split the listing and take the last field, filenames and directories that contain spaces come out wrong.
Need a little help on the logic here.
You can avoid recursion if you list all files at once
files = ftp.nlst('**/*.*')
Directories are not included in the list but the full ftp path is still available in the name.
EDIT
I'm assuming that each file name contains a dot and directory names don't. Thanks to @Niklas B. for mentioning this.
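Since the returned names are full FTP paths, you can still recover the directory structure from them if you need it. A small sketch:
files = ftp.nlst('**/*.*')
dirs  = files.map { |f| File.dirname(f) }.uniq # directories implied by the file paths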
There is a huge variety of FTP servers around.
We have clients who use some obscure proprietary, Windows-based servers, and the file listings returned by them look completely different from the Linux versions.
So what I ended up doing is: for each file/directory entry, I try changing directory into it, and if that fails, I consider it a file :)
The following method is "bulletproof":
# Checks if the given file_name is actually a file.
def is_ftp_file?(ftp, file_name)
  ftp.chdir(file_name)
  ftp.chdir('..')
  false
rescue
  true
end
file_names = ftp.nlst.select {|fname| is_ftp_file?(ftp, fname)}
Works like a charm, but please note: if the FTP directory has tons of files in it, this method takes a while to traverse all of them.
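For completeness, a recursive walk can be built on top of that helper. This is only a sketch (the method name all_remote_files is mine, and it assumes '/'-separated remote paths), so treat it as a starting point:
# Recursively collect remote file paths using is_ftp_file? above (sketch)
def all_remote_files(ftp, dir = '.')
  previous = ftp.pwd
  ftp.chdir(dir)
  entries = ftp.nlst.reject { |e| %w[. ..].include?(e) }
  files, subdirs = entries.partition { |e| is_ftp_file?(ftp, e) }
  paths = files.map { |f| File.join(ftp.pwd, f) }
  subdirs.each { |d| paths.concat(all_remote_files(ftp, d)) }
  ftp.chdir(previous)
  paths
end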
You can also use a regular expression. I put one together; please verify that it works for you as well, as I don't know whether your dir listing looks different. You have to use Ruby 1.9+ for the named captures, by the way.
reg = /^(?<type>.{1})(?<mode>\S+)\s+(?<number>\d+)\s+(?<owner>\S+)\s+(?<group>\S+)\s+(?<size>\d+)\s+(?<mod_time>.{12})\s+(?<path>.+)$/
match = entry.match(reg)
You can then access the elements by name:
match[:type] contains a 'd' if it's a directory, and a space if it's a file.
All the other elements are there as well, most importantly match[:path].
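Putting it together over a whole listing might look something like this (a sketch, assuming a Unix-style ftp.list output and the reg defined above):
dirs, files = [], []
ftp.list.each do |entry|
  m = entry.match(reg) or next # skip lines the pattern doesn't recognize
  (m[:type] == 'd' ? dirs : files) << m[:path]
end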
Assuming that the FTP server returns Unix-like file listings, the following code works. At least for me.
regex = /^d[r|w|x|-]+\s+[0-9]\s+\S+\s+\S+\s+\d+\s+\w+\s+\d+\s+[\d|:]+\s(.+)/
ftp.ls.each do |line|
  if dir = line.match(regex)
    puts dir[1]
  end
end
dir[1] contains the name of the directory (given that the inspected line actually represents a directory).
As @Alex pointed out, using patterns in filenames for this is hardly reliable. Directories can have dots in their names (.ssh, for example), and listings can be very different on different servers.
His method works, but, as he himself points out, it takes too long.
I prefer using the .size method from Net::FTP.
It returns the size of a file, or throws an error if the file is a directory.
def item_is_file?(item)
  ftp = Net::FTP.new(host, username, password)
  begin
    if ftp.size(item).is_a? Numeric
      true
    end
  rescue Net::FTPPermError
    return false
  end
end
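Usage would then be along these lines (a sketch; host, username and password are assumed to be available wherever this method lives):
require 'net/ftp'

ftp = Net::FTP.new(host, username, password)
files = ftp.nlst.select { |name| item_is_file?(name) }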
I'll add my solution to the mix...
Using ftp.nlst('**/*.*') did not work for me... the server doesn't seem to support that ** syntax.
The chdir trick with a rescue seems expensive and hackish.
Assuming that all files have at least one character, then a single period, and then an extension, I did a simple recursion.
def list_all_files(ftp, folder)
  entries = ftp.nlst(folder)
  file_regex = /.+\.{1}.*/
  files = entries.select { |e| e.match(file_regex) }
  subfolders = entries.reject { |e| e.match(file_regex) }
  subfolders.each do |subfolder|
    files += list_all_files(ftp, subfolder)
  end
  files
end
nlst seems to return the full path to whatever it finds, non-recursively... so each time you get a listing, separate the files from the folders, and then process any folder you find recursively. Collect all the file results.
To call it, you can pass a starting folder, or use "." / "" / nil to start from the current remote directory:
files = list_all_files(ftp, "my_starting_folder/my_sub_folder")
files = list_all_files(ftp, ".")
files = list_all_files(ftp, "")
files = list_all_files(ftp, nil)
I'm trying to do a simple regex to grab specific text out of a bunch of text files in a directory. The code I'm using is below:
input_dir = File.join('path/to/file/dir/', "*.txt")
Dir.glob(input_dir) do |file|
  if /\.txt$/i.match file
    File.open(file, "r") do |_file|
      /==BEGIN==(.*)==END==/.match _file.read
      puts $1
    end
  end
end
That works for exactly 1 of the files in the directory, but all other files return nil. Am I missing something here?
Hard to guess with so little data, but could it be that in most files (except one), ==BEGIN== and ==END== are on different lines?
Does /==BEGIN==(.*)==END==/m.match _file.read change anything? The /m modifier allows the dot to also match newlines in Ruby.
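If that's the cause, the same loop with /m added should capture the block. A sketch (the inner .txt check is dropped because the glob pattern already restricts to .txt files):
Dir.glob(input_dir) do |file|
  File.open(file, "r") do |_file|
    if match = /==BEGIN==(.*)==END==/m.match(_file.read)
      puts match[1]
    end
  end
end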