Bash script: create .info.file.prt if file.prt exists - bash

I have these files in a folder called /folder/files/$SET_DATE/, but there may be many more depending on the date:
Ben.prt
.info.Ben.prt
Jim.prt
John.prt
I would like to create a .info.*.prt file for each .prt file in the folder, but if one already exists, I don't want to create two.
An ll -lart would then leave me with the following.
Server ben 10:30 <~> ll -lart
.info.Ben.prt
.info.Jim.prt
.info.John.prt
Ben.prt
Jim.prt
John.prt
The values in the .info.* files would be the count of characters in the corresponding .prt files.
so I have the following.
SET_DATE= cat /tmp/date.txt
FILES="/folder/folder/folder/$DP_DATE/.info.*"
FILESF="/folder/folder/folder/$DP_DATE/"
FILESP="*.prt"
if [ ! -e $FILESF".info."$FILESP ]; then
echo 0 >> $FILESF.info.$FILESP
fi
Finding it hard to get my head around this now though.
Any kick in the right direction would be much appreciated.

for file in *.prt
do
    [ -f ".info.$file" ] || wc -c < "$file" > ".info.$file"
done
For each .prt file whose name does not start with a dot, if the corresponding .info file does not exist, create it with the number of characters found in the file.
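If the files live under the dated directory from the question, a minimal sketch (assuming the /folder/files/$SET_DATE layout and the /tmp/date.txt file from the post) would read the date first and run the loop inside that directory:
#!/bin/bash
# Sketch based on the question's paths; adjust to your real layout.
SET_DATE=$(cat /tmp/date.txt)
cd "/folder/files/$SET_DATE" || exit 1
for file in *.prt
do
    # create .info.<name>.prt holding the character count, but only if it is missing
    [ -f ".info.$file" ] || wc -c < "$file" > ".info.$file"
done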

Related

How do I write a script that loops through different folders and saves PATH outputs into a tab-delimited text file?

I am a newbie attempting to write a script that takes a text file containing the names of different folders (subj1, subj2, subj3, etc.), loops through each folder, extracts the full paths of two files inside it (e.g. /Users/desktop/subj1/animals/pig.jpg and /Users/desktop/subj1/animals/cow.jpg), and then saves the subject ID and the two paths as 3 columns in a tab-delimited text file (e.g. subj1 /Users/desktop/subj1/animals/pig.jpg /Users/desktop/subj1/animals/cow.jpg), and so on.
The output would look like:
subj1 /Users/desktop/subj1/animals/pig.jpg /Users/desktop/subj1/animals/cow.jpg
subj2 /Users/desktop/subj2/animals/pig.jpg /Users/desktop/subj2/animals/cow.jpg
I have tried searching for answers but nothing really came close to answering this query. I also need to do this for over 1000 folders, so it would be insane to try to create the file by hand.
(edit): I would have to first verify if the files exist inside each folder. I created an array using a text file of folder names. Here is what I have thus far:
read -r -a array <<< ${subj_numbs}
array_error_file=()
for subj in "${array[@]}"
do
echo "Working on.." \"${subj}\"
pig_file=${dir}/${subj}/animals/pig.jpg
cow_file=${dir}/${subj}/animals/cow.jpg
if [ ! -f $pig_file ] && [ ! -f $cow_file ]; then
echo " [!] Files ${pig_file} and ${cow_file} do not exist."
array_error_file+=($pig_file)
array_error_file+=($cow_file)
else
echo "Writing path names to text file.. -> \"${subj}\"
pig_path="$pig_file"; pwd
cow_path="$cow_file"; pwd
Something like this?
while read -r line; do
    pig="$(cd "$(dirname "$line/animals/pig.jpg")"; pwd)/$(basename "$line/animals/pig.jpg")"
    cow="$(cd "$(dirname "$line/animals/cow.jpg")"; pwd)/$(basename "$line/animals/cow.jpg")"
    printf '%s\t%s\t%s\n' "$line" "$pig" "$cow" >> yourtabfile.txt
done < yourlist.txt
I'd be glad to edit this to fit your situation if it doesn't work, but that means you would have to provide more info.
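Since you mention you first need to verify that the files exist inside each folder, a possible refinement (a sketch only; dir, yourlist.txt and the error array are names assumed from the snippets above) is:
#!/bin/bash
# Hypothetical names based on the question's snippet.
dir=/Users/desktop
out=yourtabfile.txt
array_error_file=()
while read -r subj; do
    pig="$dir/$subj/animals/pig.jpg"
    cow="$dir/$subj/animals/cow.jpg"
    if [ -f "$pig" ] && [ -f "$cow" ]; then
        printf '%s\t%s\t%s\n' "$subj" "$pig" "$cow" >> "$out"
    else
        echo " [!] Files $pig and $cow do not exist." >&2
        array_error_file+=("$pig" "$cow")
    fi
done < yourlist.txt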

grep files based on name prefixes

I have a question on how to approach a problem I've been trying to tackle at multiple points over the past month. The scenario is like so:
I have a base directory with multiple sub-directories all following the same sub-directory format:
A/{B1,B2,B3} where all B* have a pipeline/results/ directory structure under them.
All of these results directories have multiple *.xyz files in them. These *.xyz files have a certain hierarchy based on their naming prefixes. The naming prefixes in turn depend on how far they've been processed. They could be, for example, select.xyz, select.copy.xyz, and select.copy.paste.xyz, where the operations are select, copy and paste. What I wish to do is write a ls | grep or a find that picks these files based on their processing levels.
EDIT:
The processing pipeline goes select -> copy -> paste. The "most processed" file would be the one with the most of those stages as prefixes in its filename. i.e. select.copy.paste.xyz is more processed than select.copy, which in turn is more processed than select.xyz
For example, let's say
B1/pipeline/results/ has select.xyz and select.copy.xyz,
B2/pipeline/results/ has select.xyz
B3/pipeline/results/ has select.xyz, select.copy.xyz, and select.copy.paste.xyz
How can I write a ls | grep/find that picks the most processed file from each subdirectory? This should give me B1/pipeline/results/select.copy.xyz, B2/pipeline/results/select.xyz and B3/pipeline/results/select.copy.paste.xyz.
Any pointer on how I can think about an approach would help. Thank you!
For this answer, we will ignore the upper part A/B{1,2,3} of the directory structure. All files in some .../pipeline/results/ directory will be considered, even if the directory is A/B1/doNotIncludeMe/forbidden/pipeline/results. We assume that the file extension xyz is constant.
A simple solution would be to loop over the directories and check whether the files exist from back to front. That is, check if select.copy.paste.xyz exists first. In case the file does not exist, check if select.copy.xyz exists and so on. A script for this could look like the following:
#! /bin/bash
# print paths of the most processed files
shopt -s globstar nullglob
for d in **/pipeline/results; do
    if [ -f "$d/select.copy.paste.xyz" ]; then
        echo "$d/select.copy.paste.xyz"
    elif [ -f "$d/select.copy.xyz" ]; then
        echo "$d/select.copy.xyz"
    elif [ -f "$d/select.xyz" ]; then
        echo "$d/select.xyz"
    else
        :   # there is no file at all
    fi
done
It does the job, but is not very nice. We can do better!
#! /bin/bash
# print paths of the most processed files
shopt -s globstar nullglob
for dir in **/pipeline/results; do
    for file in "$dir"/select{.copy{.paste,},}.xyz; do
        [ -f "$file" ] && echo "$file" && break
    done
done
The second script does exactly the same thing as the first one, but is easier to maintain, adapt, and so on. Both scripts work with file and directory names that contain spaces or even newlines.
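If more processing stages are added later, a small variant of the same loop (a sketch, not part of the original answer) keeps the stage names in one array so only a single line needs editing:
#! /bin/bash
# print paths of the most processed files, with the stages listed only once
shopt -s globstar nullglob
stages=(select.copy.paste select.copy select)   # most processed first
for dir in **/pipeline/results; do
    for s in "${stages[@]}"; do
        [ -f "$dir/$s.xyz" ] && echo "$dir/$s.xyz" && break
    done
done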
In case you don't have whitespace in your paths, the following (hacky, but loop-free) script can also be used.
#! /bin/bash
# print paths of the most processed files
shopt -s globstar nullglob
files=(**/pipeline/results/select{.copy{.paste,},}.xyz)
printf '%s\n' "${files[@]}" | sed -r 's#(.*/)#\1 #' | sort -usk1,1 | tr -d ' '

Handling empty files when concatenating files in bash

I have a number (say, 100) of CSV files, out of which some (say, 20) are empty (i.e., 0 bytes file). I would like to concatenate the files into one single CSV file (say, assorted.csv), with the following requirement met:
For each empty file, there must be a blank line in assorted.csv.
It appears that simply doing cat *.csv >> assorted.csv skips the empty files completely in the sense that they do not have any lines and hence there is nothing to concatenate.
Though I can solve this problem using any high-level programming language, I would like to know if and how to make it possible using Bash.
Just make a loop and detect whether each file is empty. If it's empty, just echo the file name plus a comma: that creates a nearly blank line. Otherwise, prefix each line with the file name plus a comma.
#!/bin/bash
out=assorted.csv
# delete the file prior to doing concatenation
# or if ran twice it would be counted in the input files!
rm -f "$out"
for f in *.csv
do
    if [ -s "$f" ] ; then
        #cat "$f" | sed 's/^/$f,/' # cat+sed is too much here
        sed "s/^/$f,/" "$f"
    else
        echo "$f,"
    fi
done > "$out"
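If you only need the blank line and not the file-name prefix (which is all the question literally asks for), a minimal variant of the same loop would be:
#!/bin/bash
# emit each CSV as-is, plus one blank line for every empty file
out=assorted.csv
for f in *.csv
do
    [ "$f" = "$out" ] && continue   # don't feed the output file back in on a re-run
    if [ -s "$f" ] ; then
        cat "$f"
    else
        echo
    fi
done > "$out"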

Catenate files with blank lines between them [duplicate]

How can we copy the contents of all the files in a given directory into one file so that there are two empty lines between the contents of each file?
Needless to say, I am new to bash scripting, and I know this is not extra complicated code!
Any help will be greatly appreciated.
Related links:
* How do I compare the contents of all the files in a directory against another directory?
* Append contents of one file into another
* BASH: Copy all files and directories into another directory in the same parent directory
After reading comments, my initial attempt is this:
cat * > newfile.txt
But this does not create two empty lines between the contents of the files.
Try this.
awk 'FNR==1 && NR>1 { printf("\n\n") }1' * >newfile.txt
The variable FNR is the line number within the current file and NR is the line number overall.
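For instance, with two small throwaway files (hypothetical names a.txt and b.txt), the blank lines are printed only at the boundary between the files:
printf 'a1\na2\n' > a.txt
printf 'b1\n' > b.txt
awk 'FNR==1 && NR>1 { printf("\n\n") }1' a.txt b.txt
# output: a1, a2, two blank lines, then b1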
One way:
(
    files=(*)
    cat "${files[0]}"
    for (( i = 1; i < "${#files[@]}" ; ++i )) ; do
        echo
        echo
        cat "${files[i]}"
    done
) > newfile.txt
Example of file organization:
I have a directory ~/Pictures/Temp
If I wanted to move PNGs from that directory to another directory, I would first want to set a variable for my file names:
# This could be other file types as well
file=$(find ~/Pictures/Temp/*.png)
Of course there are many ways to do this; check out:
$ man find
$ man ls
Then I would want to set a directory variable (especially if this directory is going to be something like a date):
dir=$(some defining command here perhaps an awk of an ls -lt)
# Then we want to check for that directory's existence and make it if
# it doesn't exist
[[ -e $dir ]] || mkdir "$dir"
# [[ -d $dir ]] will work here as well
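If the directory really is date-based, one concrete possibility (a sketch; the one-directory-per-day layout is an assumption) is:
# hypothetical: one target directory per day, named YYYY-MM-DD
dir="$HOME/Pictures/$(date +%F)"
[[ -d $dir ]] || mkdir -p "$dir"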
You could write a for loop:
# t counts seconds; with the sleep below, the loop runs for just under a minute
# super useful with an every-minute crontab
for ((t=1; t<59; t++));
do
    # see above
    file=$(blah blah)
    # do nothing if there is no file to move
    [[ -z $file ]] || mv "$file" "$dir/"
    sleep 1
done
Please Google this if any of it seems unclear; here are some useful links:
https://www.gnu.org/software/gawk/manual/html_node/For-Statement.html
http://www.gnu.org/software/gawk/manual/gawk.html
The best link among these is below:
http://mywiki.wooledge.org/BashFAQ/031
Edit:
Anyhow, where I was going with that whole answer is that you could easily write a script that organizes certain files on your system for 60 seconds, and add a crontab entry so the organizing is done for you automatically:
crontab -e
Here is an example
$ crontab -l
* * * * * ~/Applications/Startup/Desktop-Cleanup.sh
# where ~/Applications/Startup/Desktop-Cleanup.sh is a custom application that I wrote

Cannot change the names of files that are the result of a for loop that echoes file names

I've been successfully running a script that prints out the names of the files in a specific directory by using
for f in data/*
do echo "$f"
done
and when I run the program it gives me:
data/data-1.txt
data/data-2.txt
data/data-3.txt (the files in the data directory)
however, when I need to change all of the file names from data-*.txt to mydata-*.txt, I can't figure it out.
I keep trying to use sed s/data/mydata/g $f but it prints out the whole file instead and doesn't change the name correctly. Can anybody give me some tips on how to change the file names? It also seems to change the name of the directory if I use sed, so I'm kind of at a dead end. Even using mv doesn't seem to do anything.
for f in data/*
do
    NewName="$( echo "${f}" | sed 's#/data-\([0-9]*\.txt\)$#/mydata-\1#' )"
    if [ ! "${f}" = "${NewName}" ]
    then
        mv "${f}" "${NewName}"
    fi
done
Based on your code, but there are lots of other ways to do it (e.g. find -exec).
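A shorter variant using bash parameter expansion instead of sed (just a sketch, assuming the data/data-N.txt names shown in the question):
for f in data/data-*.txt
do
    # data/data-1.txt -> data/mydata-1.txt
    mv "$f" "${f/data-/mydata-}"
done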
