Given this directory structure:
├── script
│ ├── search.rb
│ └── searchable.txt
└── unsearchable.txt
I can only open files under script (e.g. searchable.txt). But how do I read unsearchable.txt in Ruby?
(I got this error: No such file or directory @ rb_sysopen - <filename>.txt)
It's just one level up from your current file.
file = File.new('../unsearchable.txt')
Or, independent of the current working directory:
file = File.new(File.join(File.dirname(__FILE__), '..', 'unsearchable.txt'))
__FILE__ is the path of the current file, and .. is the parent directory. Note that the first form is resolved against the process's working directory, while this one is resolved against the script's own location.
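For example, a minimal sketch in script/search.rb that just prints the file's contents (using __dir__, which is the directory containing the current script):

# Resolve the path relative to this script, not the working directory
path = File.expand_path('../unsearchable.txt', __dir__)
puts File.read(path)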
Related
I want to embed a file that is located one level above the Go source file.
for example:
dir1
    file.go
dir2
    file.txt
How to embed file.txt inside file.go using go:embed?
The documentation states:
Patterns may not contain ‘.’ or ‘..’ or empty path elements, nor may they begin or end with a slash.
So what you are trying to do is not supported directly. Further information is available in the comments on this issue.
One thing you can do is to put a Go file in dir2, embed file.txt in that, and then import/use it from dir1/file.go (assuming the folders are in the same module), as in the sketch below.
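A minimal sketch of that workaround (the file name embed.go, the module path example.com/mymodule, and the variable name FileTxt are assumptions):

// dir2/embed.go
package dir2

import _ "embed"

//go:embed file.txt
var FileTxt []byte // exported so other packages can read it

// dir1/file.go
package main

import (
	"fmt"

	"example.com/mymodule/dir2"
)

func main() {
	fmt.Println(string(dir2.FileTxt))
}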
This is not supported in the embed package, as stated by @Brits (https://pkg.go.dev/embed).
A pattern I like to use is to create a resources.go file in my project's internal package and put all my embedded resources in there, e.g.:
├── cmd
│   └── cool.go
└── internal
    └── resources
        ├── resources.go
        ├── fonts
        │   └── coolfont.ttf
        └── icons
            └── coolicon.ico
resources.go
import _ "embed"
//go:embed fonts/coolfont.fs
var fonts byte[] // embed single file
//go:embed icons/*
var icons embed.FS // embed whole directory
There are libraries that can help with this as well, such as those listed at https://github.com/avelino/awesome-go#resource-embedding.
But I've not run into a use case where plain old embed wasn't enough for my needs.
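For reference, a sketch of how cmd/cool.go might consume those variables (the module path example.com/coolapp is an assumption, and the variables must be exported as above):

// cmd/cool.go
package main

import (
	"fmt"

	"example.com/coolapp/internal/resources"
)

func main() {
	fmt.Printf("font: %d bytes\n", len(resources.Fonts))

	// List everything embedded under icons/
	icons, err := resources.Icons.ReadDir("icons")
	if err != nil {
		panic(err)
	}
	for _, e := range icons {
		fmt.Println("embedded icon:", e.Name())
	}
}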
I have multiple directories that in turn contain subdirectories. Example:
company_a/raw/2020/12
The first directory (company_a in the sample above) is variable, but always follows the pattern "word_letter".
The second directory, raw, is always the same.
The values of the last two directories (/2020/12 in the sample above) are variable.
My purpose is to extract the size of each leaf subdirectory (given the sample path above, the leaf subdir would be 12/) using a for loop.
Is there some kind of reverse-basename utility that would let me list the entire path, using the company_x/ dir as the root? Because before I can extract each directory's size, I first need to figure out how to list the last directories in each path.
A sample tree for reference:
$ tree company_b
company_b
└── raw
    └── 2020
        ├── 05
        │   └── data.raw
        ├── 06
        │   └── data.raw
        ├── 07
        │   └── data.raw
        └── 08
            └── data.raw
6 directories, 4 files
The du command does this very well using wildcards.
du -h */raw/*/*
Output:
80K company_b/raw/2021/02
80K company_b/raw/2021/05
80K company_b/raw/2021/04
80K company_b/raw/2021/01
80K company_b/raw/2021/03
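If you specifically want a for loop (for example, to post-process each size), a sketch along the same lines, run from the directory containing the company_* folders:

#!/usr/bin/env bash
# Loop over every leaf directory matching <word_letter>/raw/<year>/<month>
for dir in */raw/*/*/; do
    size=$(du -sh "$dir" | cut -f1)   # du -sh prints "<size>\t<dir>"; keep the size
    echo "$size $dir"
done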
I have a couple of hundred files in one folder, and I'd like to randomly move them to a number of different folders with a bash script - however, I'd like to fill each of those destination folders only up to a given capacity.
I'm thinking the right way to approach this is to assign two arrays, one containing all destination folders and one containing all files. Then I can randomly take a file from the filesarr and place it in a destination folder. My question is, how can I limit the number of files placed in each destination folder? So say I'm looking for ten files per destination folder - how can I move the first ten files from filesarr to the first folder in foldersarr, then move the next ten to the second folder in foldersarr, until all files have been moved? I know I should probably use a counter here, but my current attempt (below) is not doing the trick.
filesarr=(/Path/to/files/*) # this is the array of files to shuffle
foldersarr=(/Path/to/destination/folders/) # array of folders to move into
foldercount=0 # set it to 0
for afolder in "${foldersarr[@]}"; do
  if [[ "$foldercount" -gt 10 ]]; then
    echo "$foldercount files in folder, exiting and moving to next folder"
    exit 1
  else
    for afile in "${filesarr[@]}"; do # do loop length(array) times; once for each file
      length=${#filesarr[@]}
      randomi=$(( $RANDOM % $length )) # select a random index
      filename=${filesarr[$randomi]}
      mv ${filename} ${foldersarr[@]}
      echo "moving '$filename'"
      foldercount=$((foldercount+1))
      unset -v "filesarr[$randomi]" # unset after moved
      array=("${filesarr[@]}") # remove NULL elements introduced by unset; copy array
    done
  fi
done
My current directory structure has all the files in a "holding" directory, and all the destination folders I'd like to move them into under a separate "destinations" directory.
rootfolder
│
├── holding
│   ├── dywd.pdf
│   ├── ... (approx. 200 files)
│   └── kjfwekfjnwe.pdf
│
└── destinations
    ├── folder01
    ├── ...
    └── folder10
I'd like to end up with this:
rootfolder
│
├── holding
│
└── destinations
    ├── folder01
    │   ├── lwkejdwe.pdf
    │   ├── ...
    │   └── (ten files in this folder)
    ├── ...
    │
    └── folderXX
        ├── qwuoe.pdf
        ├── ...
        └── (ten files in this folder)
Something like this (not tested):
dirs=(..)                      # array of target dirs
dir_length=${#dirs[@]}
c=0
find . -maxdepth 1 -type f |   # or any other list of files
shuf |
while IFS= read -r file; do
  mv "$file" "${dirs[c++ % dir_length]}"
done
This will round-robin the files across the target directories. The randomness comes from shuf, so there is no need to maintain the list of files separately.
You could create some "bucket" variables and fill each one with the same number of file names, i.e. divide all the files in scope across these buckets. Then, when done, write each bucket into a separate folder; a sketch of that idea follows.
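A minimal sketch of that bucket approach, assuming the rootfolder layout above and filenames without newlines:

#!/usr/bin/env bash
files=(rootfolder/holding/*)           # all files to distribute
folders=(rootfolder/destinations/*/)   # all destination folders
per_folder=10                          # capacity of each "bucket"

# Shuffle the file list once, then deal it out per_folder at a time.
mapfile -t shuffled < <(printf '%s\n' "${files[@]}" | shuf)

i=0
for folder in "${folders[@]}"; do
  for ((n = 0; n < per_folder && i < ${#shuffled[@]}; n++)); do
    mv "${shuffled[i++]}" "$folder"
  done
done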
Bash Script:
Each file inside the table directories will need to be renamed from keyspace to newkeyspace_456 when it is copied to the destination.
└── Main_folder
    ├── keyspace
    │   ├── tableA-12323/keyspace-tableA-12323-ka-1-Data.db
    │   ├── tableB-123425/keyspace-tableA-123425-ka-1-Data.db
    │   └── tableC-12342/keyspace-tableA-12342-ka-1-Data.db
    └── newkeyspace_456 (given folder) and its subfolders
        ├── tableA-12523
        ├── tableB-173425
        └── tableC-1242
For example:
keyspace/tableA-12323/keyspace-tableA-12323-ka-1-Data.db
to
newkeyspace_456/tableA-12523/newkeyspace_456-tableA-12523-ka-1-Data.db
Note that a table of a given type (A, B, or C) is only copied to the same table type in the other keyspace. The table ID in the file name also needs to change; in the example, 12323 becomes 12523 when the file is copied to the directory newkeyspace_456/tableA-12523.
Type A table files are copied from keyspace/tableA-12323 to the Type A table directory newkeyspace_456/tableA-12523.
How do I approach this problem?
Thanks
tom
Use parameter expansion with string substitution to change the filename, like this:
for fn in $(find ./keyspace -path '*.db'); do
  cp "$fn" "${fn//keyspace/newkeyspace_456}"
done
I've been using ARGV to open files, but it feels clunky, and I want to keep the files in a different folder.
I want to open input.txt from within talk_parser.rb, and I don't want to hardcode the file name either.
My directory structure (pwd is bin):
├── bin
│   └── talk_parser.rb
└── data
    └── input.txt
I tried:
x = Dir.glob('../data/*.txt').to_s
file = File.open(File.expand_path(x))
but I get this error:
talk_parser.rb:34:in `initialize':
No such file or directory @ rb_sysopen - /home/huvi/Desktop/test/bin/["../data/input.txt"] (Errno::ENOENT)
from talk_parser.rb:34:in `open'
from talk_parser.rb:34:in `<main>'
not sure what to do
Dir.glob returns an Array.
You can get the first element and open it:
path = Dir.glob('../data/*.txt').first
file = File.open(path)
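If you also want this to work regardless of the current working directory, a sketch that builds the glob pattern relative to the script's own directory (__dir__):

# bin/talk_parser.rb
path = Dir.glob(File.expand_path('../data/*.txt', __dir__)).first
abort 'no input file found in ../data' if path.nil?
file = File.open(path)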