bash move is failing

I am running the commands below in a script:
move_jobs() {
    cd $JOB_DIR
    for i in `cat $JOBS_FILE`
    do
        if [ `ls | grep -i ^${i}- | wc -l` -gt 0 ]; then
            cd $i
            if [ ! -d jobs ]; then
                mkdir jobs && cd .. && mv "${i}"-* "${i}"/jobs/
            else
                cd .. && mv "${i}"-* "${i}"/jobs/
            fi
            error_handler $?
        fi
    done
}
but it is failing with:
mv: cannot stat `folder-*': No such file or directory
I am not sure why the mv command is failing on the wildcard.

Your script is overly complicated and has several issues, one of which will be the problem. My guess is that it's the ls | grep ... part, but to find out, you should add some debug logging.
for i in $(cat ...) loops through words, not lines.
Do not parse ls.
And if you do anyway, never grep for filenames; include the pattern in the ls call itself: ls "${i}"-* | wc -l.
You do not need to check whether a folder exists when the only difference is that you create it; use mkdir -p instead.
Jumping around between folders makes the script almost unreadable, as the reader has to keep track of every cd; a subshell sidesteps this, as sketched below.
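As an aside, a minimal sketch of the subshell idea (not part of the original script): a cd inside ( ... ) dies with the subshell, so nothing after the closing parenthesis is affected by it.
( cd "$i" && mkdir -p jobs )   # the cd only affects this subshell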
You could simply write the following, which I think will do what you want:
xargs -a "$JOBS_FILE" -I{} \
    sh -c "
        mkdir -p '$JOB_DIR/{}/jobs';
        mv '$JOB_DIR/{}-'* '$JOB_DIR/{}/jobs';
    "
or if you need more control:
while IFS= read -r jid; do
    if ls "$JOB_DIR/$jid-"* &>/dev/null; then
        TARGET_DIR="$JOB_DIR/$jid/jobs"
        mkdir -p "$TARGET_DIR"
        mv "$JOB_DIR/$jid-"* "$TARGET_DIR"
        echo "OK"
    else
        echo "No files to move."
    fi
done < "$JOBS_FILE"
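If you still want the error_handler hook from your original function, it can go straight after the mv (a sketch; error_handler is your own function from the question, assumed to take an exit status):
        mv "$JOB_DIR/$jid-"* "$TARGET_DIR"
        error_handler $?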

making a file list and running commands on each file in a directory

I have 50 files in a directory with the .ar extension.
My idea is to make a list of these file names, read each one, go back to the directory, and run the following two commands on each file ($i is the filename.ar):
paz -r -L -e clean $i
psrplot -pF -j CDTp -j 'C max' -N2,1 -D $i.ps/cps -c set=pub -c psd=0 $i $i.clean
Using *.ar does not work, as it just keeps over-writing the first file and gives no proper output. Can someone please help with a bash script?
The bash script I used, without making a list and running directly in the directory, is:
#!env bash
for i in $@
do
    outfile=$(basename $i).txt
    echo $i
    paz -r -L -e clean $i
    psrplot -pF -j CDTp -j 'C max' -N2,1 -D $i.ps/cps -c set=pub -c psd=0 $i $i.clean
done
Please help, I have been trying for a while.
You want to process each file, one at a time. The safest way to do this is to use find ... -print0 with a while read loop, like this:
#!/bin/bash
#
ardir="/data"
# Basic validation
if [[ ! -d "$ardir" ]]
then
    echo "ERROR: the directory ($ardir) does not exist."
    exit 1
fi
# Process each file
find "$ardir" -type f -name "*.ar" -print0 | while IFS= read -r -d '' arfile
do
    echo "DEBUG file=$arfile"
    paz -r -L -e clean "$arfile"
    psrplot -pF -j CDTp -j 'C max' -N2,1 -D "$arfile.ps/cps" -c set=pub -c psd=0 "$arfile" "$arfile.clean"
done
This method (and so many more!) is documented here: http://mywiki.wooledge.org/BashFAQ/001
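If the 50 files all sit directly in one directory (no recursion needed), a plain glob loop is a simpler alternative. This is only a sketch, assuming the same /data directory as above; the quoting is the important part:
#!/bin/bash
for arfile in /data/*.ar
do
    [ -e "$arfile" ] || continue   # skip the literal pattern when nothing matches
    paz -r -L -e clean "$arfile"
    psrplot -pF -j CDTp -j 'C max' -N2,1 -D "$arfile.ps/cps" -c set=pub -c psd=0 "$arfile" "$arfile.clean"
done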

Send files to folders using bash script

I want to copy the functionality of a Windows program called files2folder, which basically lets you right-click a bunch of files and send them to their own individual folders.
So
1.mkv 2.png 3.doc
gets put into directories called
1 2 3
I have got it to work using this script, but it sometimes throws errors while still accomplishing what I want:
#!/bin/bash
ls > list.txt
sed -i '/list.txt/d' ./list.txt
sed 's/.$//;s/.$//;s/.$//;s/.$//' ./list.txt > list2.txt
for i in $(cat list2.txt); do
    mkdir $i
    mv $i.* ./$i
done
rm *.txt
is there a better way of doing this? Thanks
EDIT: My script failed with real-world filenames, as they contained more than one ., so I had to use a different sed command to make it work. This is an example filename I'm working with:
Captain.America.The.First.Avenger.2011.INTERNAL.2160p.UHD.BluRay.X265-IAMABLE
I guess you are getting errors on . and .., so change your ls call to:
ls -A > list.txt
-A List all entries except for . and ... Always set for the super-user.
You don't have to create a file to achieve the same result; just assign the output of your ls command to a variable, like this:
files=$(ls -A)
for file in $files; do
    echo $file
done
You can also check if the resource is a file or directory like this:
files=$(ls -A)
for res in $files; do
    if [[ -d $res ]]; then
        echo "$res is a folder"
    fi
done
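Note that both loops above word-split the unquoted $files, so any name containing spaces comes apart into several words; with filenames like the one in the question, a plain glob is safer. A sketch of the same directory test (add shopt -s dotglob if you also need hidden entries):
for res in *; do
    if [[ -d "$res" ]]; then
        echo "$res is a folder"
    fi
done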
This script will do what you ask for:
files2folder:
#!/usr/bin/env sh
for file; do                      # loop over the positional arguments
    dir="${file%.*}"              # file name minus its last extension
    # skip non-regular files and names without any extension
    { ! [ -f "$file" ] || [ "$file" = "$dir" ]; } && continue
    echo mkdir -p -- "$dir"       # echo = dry run; see the note below
    echo mv -n -- "$file" "$dir/"
done
Example directory/files structure:
ls -1 dir/*.jar
dir/paper-279.jar
dir/paper.jar
Running the script above:
chmod +x ./files2folder
./files2folder dir/*.jar
Output:
mkdir -p -- dir/paper-279
mv -n -- dir/paper-279.jar dir/paper-279/
mkdir -p -- dir/paper
mv -n -- dir/paper.jar dir/paper/
To make it actually create the directories and move the files, remove the two echos.

why is the 'ls' command printing the directory content multiple times

I have the following shell script, in which I want to check specific directory contents on remote machines and print them to a file.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line
        if echo $f1 | grep -q "xx.xx.xxx";
        then
            ssh user@$f1 ls path/*war_* > path/$f1.txt < /dev/null; ls path/*zip_* >> path/$f1.txt < /dev/null;
            ssh user@$f1 ls -d /apps/jetty*_* >> path/$f1.txt < /dev/null;
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
I am running this script from a cron job every 2 minutes, as follows:
*/2 * * * * /path/myscript.sh
but somehow I am ending up with the following output file:
/apps/jetty/webapps_wars/test_new.war
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/jetty_xx.xx_2020-08-05
path/jetty_new
path/jetty_xx.xx_2020-08-05
path/jetty_new
I am not sure why I am getting the files in the list twice, sometimes three times, but when I execute the script directly from PuTTY, it works fine. What do I need to change to correct this script?
Example:
~$ cd tmp
~/tmp$ mkdir test
~/tmp$ cd !$
cd test
~/tmp/test$ mkdir -p apps/jetty/webapp_wars/ && touch apps/jetty/webapp_wars/test_new.war
~/tmp/test$ mkdir path
~/tmp/test$ touch path/{ReleaseTest.static.zip_2020-08-05,jetty_xx.xx_2020-08-05,jetty_new}
~/tmp/test$ cd ..
~/tmp$ listpath=$(find test/path \( -name "*2020-08-05" -o -name "*new" \) )
~/tmp$ listapps=$(find test/apps/ -name "*war" )
~/tmp$ echo ${listpath[@]}" "${listapps[@]} | tr " " "\n" | sort > resultfile
~/tmp$
~/tmp$ cat resultfile
test/apps/jetty/webapp_wars/test_new.war
test/path/jetty_new
test/path/jetty_xx.xx_2020-08-05
test/path/ReleaseTest.static.zip_2020-08-05
~/tmp$ rm -rf test/ && unset listapps && unset listpath && rm resultfile
~/tmp$
This way you get only one result for each pattern you are looking for in your if...then...else block of code.
Just adapt the ssh ... find commands and take care of quotes and parentheses; this is the easiest solution, and you do not have to rewrite the script from scratch. Also be careful about local vs. remote variables if you use them.
You really should not use ls, but the fundamental problem is probably that three separate commands with three separate wildcards could match the same file three times.
Also, one of your commands is executed locally (you forgot to put ssh etc. in front of the second one), so if the wildcard matches on your local computer, it would produce a result which doesn't reflect the situation on the remote server.
Try this refactoring.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line <- XXX this is not true
        if echo "$f1" | grep -q "xx.xx.xxx";
        then
            ssh user@$f1 "printf '%s\n' path/*war_* path/*zip_* /apps/jetty*_*" | sort -u >path/"$f1".txt < /dev/null
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
The sort gets rid of any duplicates. This assumes none of your file names contain newlines; if they do, you'd need to use something which robustly handles them (try printf '%s\0' and sort -z but these are not portable).
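For completeness, a NUL-delimited variant might look like the sketch below. sort -z is a GNU extension, and the final tr exists only so the .txt file stays line-oriented, so names that actually contain newlines will still look odd in the output file:
ssh user@$f1 "printf '%s\0' path/*war_* path/*zip_* /apps/jetty*_*" | sort -zu | tr '\0' '\n' > path/"$f1".txt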
ls would definitely also accept three different wildcards, but as the link above explains, you really never want to use ls in scripts.

find and gzip a directory recursively without a directory/file test

I'm working on improving our bash backup script, and would like to move away from rsync and towards using gzip and a "find since last run timestamp" system. I would like to have a mirror of the original tree, except with each destination file gzipped. However, if I pass gzip a destination path that does not exist, it complains. I created the test below, but I can't believe that this is the most efficient solution. Am I going about this wrong?
Also, I'm not crazy about using while read either, but I can't get the right variable expansion with the alternatives I've tried, such as for file in $(find ...); do.
CentOS 6.x. Relevant snippet below, simplified for focus:
cd /mnt/${sourceboxname}/${drive}/ && eval find . -newer timestamp | while read objresults;
do
    if [[ -d "${objresults}" ]]
    then
        mkdir -p /backup/${sourceboxname}/${drive}${objresults}
    else
        cat /mnt/${sourceboxname}/${drive}/"${objresults}" | gzip -fc > /backup/${sourceboxname}/${drive}"${objresults}".gz
    fi
done
touch timestamp #if no stderr
With proposed changes from my comments incorporated, I suggest this code:
#!/bin/bash
src="/mnt/$sourceboxname/$drive"
dst="/backup/$sourceboxname/$drive"
timestamp="$src/timestamp"
errors=$({ cd "$src" && find . -newer "$timestamp" | while IFS= read -r objresults
do
    mkdir -p "$(dirname "$dst/$objresults")"
    [[ -d "$objresults" ]] || gzip -fc < "$objresults" > "$dst/$objresults.gz"
done; } 2>&1)
if [[ -z "$errors" ]]
then
    touch "$timestamp"
else
    echo "$errors" >&2
    exit 1
fi

grep spacing error

Hi guys, I have a problem with grep. I don't know if there is another search command for shell scripts.
I'm trying to back up a folder, AhmetsFiles, which is stored on my flash disk, but at the same time I have to group the files by their extensions and save them into an [extensionName] folder.
An example: /media/FlashDisk/AhmetsFiles/lecture.pdf must be stored in /home/$(whoami)/Desktop/backups/pdf.
The problem is that I can't copy a file whose name contains spaces (lecture 2.pptx).
After this introduction, here is my code.
filename="/media/FlashDisk/extensions"
count=0
exec 3<&0
exec 0< $filename
mkdir "/home/$(whoami)/Desktop/backups"
while read extension
do
    cd "/home/$(whoami)/Desktop/backups"
    rm -rf "$extension"
    mkdir "$extension"
    cd "/media/FlashDisk/AhmetsFiles"
    files=( `ls | grep -i "$extension"` )
    fCount=( `ls | grep -c -i "$extension"` )
    for (( i=0 ; $i<$fCount ; i++ ))
    do
        cp -f "/media/FlashDisk/AhmetsFiles/${files[$i]}" "/home/$(whoami)/Desktop/backups/$extension"
    done
    let count++
done
exec 0<&3
exit 0
Your looping is way more complicated than it needs to be; there is no need for ls, grep, or the files and fCount variables:
for file in *.$extension
do
    cp -f "/media/FlashDisk/AhmetsFiles/$file" "$HOME/Desktop/backups/$extension"
done
This works correctly with spaces.
I'm assuming that you actually wanted to interpret $extension as a file extension, not some random string in the middle of the filename like your original code does.
Why don't you
grep -i "$extension" | while IFS=: read x ; do
    cp ..
done
instead?
Also, I believe you may prefer something like grep -i ".$extension$" instead (anchor it to the end of line).
On the other hand, the simplest and most efficient way is probably
cp -f /media/FlashDisk/AhmetsFiles/*.$extension "$HOME/Desktop/backups/$extension/"
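Putting those pieces together, the whole loop might shrink to something like this sketch (paths taken from the question; the 2>/dev/null only hides cp's complaint when no file has a given extension):
while IFS= read -r extension; do
    mkdir -p "$HOME/Desktop/backups/$extension"
    cp -f /media/FlashDisk/AhmetsFiles/*."$extension" "$HOME/Desktop/backups/$extension/" 2>/dev/null
done < /media/FlashDisk/extensions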
