Need a script that copies a single file to multiple directories - bash

I'm stuck on this script and would like some help with it!
I want to make a folder called "upload" containing a script that copies a .jar file from there to multiple directories (see below):
/home/minecraft/multicraft/servers/EUSim1
/home/minecraft/multicraft/servers/EUSim2
/home/minecraft/multicraft/servers/EUSim3
and so on.

A Quick Script, for reference:
Script
#!/bin/bash
# First argument is the file to copy; the remaining arguments are destinations.
inputfile=$1
shift
for dest in "$@"; do
    cp -v "$inputfile" "$dest"
done
Command
./script simple.jar \
/home/minecraft/multicraft/servers/EUSim1/simple.jar \
/home/minecraft/multicraft/servers/EUSim2/simple.jar \
/home/minecraft/multicraft/servers/EUSim3/simple.jar
Output
'simple.jar' -> '/home/minecraft/multicraft/servers/EUSim1/simple.jar'
'simple.jar' -> '/home/minecraft/multicraft/servers/EUSim2/simple.jar'
'simple.jar' -> '/home/minecraft/multicraft/servers/EUSim3/simple.jar'
This is a simple script. You can make small tweaks, such as adding a --prefix option, or make the script read its destination list from a file.
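For example, a minimal sketch of the read-from-a-file variant, assuming a hypothetical destinations.txt with one target directory per line:

#!/bin/bash
# Copy the file given as $1 to every directory listed in destinations.txt
# (hypothetical file name; adjust to taste).
inputfile=$1
while IFS= read -r dest; do
    cp -v "$inputfile" "$dest"
done < destinations.txt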
Or use cp with xargs:
echo dir1 dir2 dir3 | xargs -n 1 cp file
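For the directories in the question, that one-liner would look like:

echo /home/minecraft/multicraft/servers/EUSim1 \
     /home/minecraft/multicraft/servers/EUSim2 \
     /home/minecraft/multicraft/servers/EUSim3 | xargs -n 1 cp simple.jar

xargs -n 1 runs cp once per directory, appending the directory as the last argument.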

Very simple script:
How to use it:
1. touch simpleScript.sh
2. vim simpleScript.sh
3. Copy/paste the script below
4. Update TRX_SOURCE_PATH, DEST_PATH, DEST_PATH1, DEST_PATH2
5. Save
6. chmod +x ./simpleScript.sh
#!/bin/bash
TRX_SOURCE_PATH='/Path/Test.pdf'
DEST_PATH='/Path/Test'
DEST_PATH1='/Path/Test1'
DEST_PATH2='/Path/Test2'
echo "Starting copy"
echo "Destination: $DEST_PATH"
cp "$TRX_SOURCE_PATH" "$DEST_PATH"
echo "copy done for folder: $DEST_PATH"
echo "Destination: $DEST_PATH1"
cp "$TRX_SOURCE_PATH" "$DEST_PATH1"
echo "copy done for folder: $DEST_PATH1"
echo "Destination: $DEST_PATH2"
cp "$TRX_SOURCE_PATH" "$DEST_PATH2"
echo "copy done for folder: $DEST_PATH2"
echo "All Copy done"
Hope this script helps you.

Related

Send files to folders using bash script

I want to copy the functionality of a windows program called files2folder, which basically lets you right-click a bunch of files and send them to their own individual folders.
So
1.mkv 2.png 3.doc
gets put into directories called
1 2 3
I have got it to work using this script, but it sometimes throws errors while still accomplishing what I want:
#!/bin/bash
ls > list.txt
sed -i '/list.txt/d' ./list.txt
sed 's/.$//;s/.$//;s/.$//;s/.$//' ./list.txt > list2.txt
for i in $(cat list2.txt); do
    mkdir $i
    mv $i.* ./$i
done
rm *.txt
is there a better way of doing this? Thanks
EDIT: My script failed with real-world filenames, as they contained more than one ".", so I had to use a different sed command to make it work. This is an example filename I'm working with:
Captain.America.The.First.Avenger.2011.INTERNAL.2160p.UHD.BluRay.X265-IAMABLE
I guess you are getting errors on . and .., so change your call to ls to:
ls -A > list.txt
-A: list all entries except for . and ..; always set for the super-user.
You don't have to create a file to achieve the same result; just assign the output of your ls command to a variable, like this:
files=$(ls -A)
for file in $files; do
    echo "$file"
done
You can also check if the resource is a file or directory like this:
files=$(ls -A)
for res in $files; do
    if [[ -d $res ]]; then
        echo "$res is a folder"
    fi
done
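Note that iterating over ls output breaks on filenames that contain spaces; a glob avoids parsing ls entirely. A sketch:

#!/bin/bash
# Glob instead of ls: handles spaces in names, includes hidden entries,
# and expands to nothing (rather than a literal '*') in an empty directory.
shopt -s dotglob nullglob
for res in *; do
    if [[ -d $res ]]; then
        echo "$res is a folder"
    fi
done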
This script will do what you ask for:
files2folder:
#!/usr/bin/env sh
# For each argument: derive the directory name by stripping the last
# extension, skip non-files and names with no extension, then create
# the directory and move the file into it.
for file; do
    dir="${file%.*}"
    { ! [ -f "$file" ] || [ "$file" = "$dir" ]; } && continue
    echo mkdir -p -- "$dir"
    echo mv -n -- "$file" "$dir/"
done
Example directory/files structure:
ls -1 dir/*.jar
dir/paper-279.jar
dir/paper.jar
Running the script above:
chmod +x ./files2folder
./files2folder dir/*.jar
Output:
mkdir -p -- dir/paper-279
mv -n -- dir/paper-279.jar dir/paper-279/
mkdir -p -- dir/paper
mv -n -- dir/paper.jar dir/paper/
To make it actually create the directories and move the files, remove both echo prefixes.

shell script for checking new files [duplicate]

I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You may try the entr tool, which runs arbitrary commands when files change. Example for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to run it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
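Combining the two patterns, to rerun a build whenever anything under a directory tree changes, including newly added files (a sketch assuming a src/ tree and a Makefile):

while true; do find src/ | entr -d sh -c 'make && make test'; done

The -d flag makes entr exit when a new file appears, and the outer loop restarts it with a fresh file list.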
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js"   # might want to change this
function block_for_change {
    inotifywait --recursive \
        --event modify,move,create,delete \
        "$DIRECTORY_TO_OBSERVE"
}
BUILD_SCRIPT=build.sh   # might want to change this too
function build {
    bash "$BUILD_SCRIPT"
}
build
while block_for_change; do
    build
done
Uses inotify-tools. Check inotifywait man page for how to customize what triggers the build.
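For example, to ignore editor swap files, block_for_change could pass inotifywait's --exclude option (an extended regex; a sketch):

function block_for_change {
    inotifywait --recursive \
        --event modify,move,create,delete \
        --exclude '\.swp$' \
        "$DIRECTORY_TO_OBSERVE"
}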
Use inotify-tools.
The linked GitHub page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
    --timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
    -e close_write /tmp/test |
while read -r date time dir file; do
    changed_abs=${dir}${file}
    changed_rel=${changed_abs#"$cwd"/}
    rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
        username@example.com:/backup/root/dir && \
        echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to get a file's last status-change time (ctime) and runs a command whenever that time changes.
#!/bin/bash
while true
do
    ATIME=$(stat -c %Z /path/to/the/file.txt)
    if [[ "$ATIME" != "$LTIME" ]]
    then
        echo "RUN COMMAND"
        LTIME=$ATIME
    fi
    sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try to earn hacker XP by judicious application of tail -f.
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1"          # Name of file
command="${*:2}"   # Command to run on change (takes rest of line)
t1="$(ls --full-time "$file" | awk '{ print $7 }')"      # Get latest save time
while true
do
    t2="$(ls --full-time "$file" | awk '{ print $7 }')"  # Compare to new save time
    if [ "$t1" != "$t2" ]; then t1="$t2"; $command; fi   # If different, run command
    sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: The above was tested on Ubuntu 12.04; for macOS, change the ls lines to:
"$(ls -lT "$file" | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
    if [ -z "$1" -o -z "$2" ]; then
        echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
    elif ! [ -r "$1" ]; then
        echo "Can't react to $1, permission denied"
    else
        TARGET="$1"; shift
        ACTION="$@"
        while sleep 1; do
            ATIME=$(stat -c %Z "$TARGET")
            if [[ "$ATIME" != "${LTIME:-}" ]]; then
                LTIME=$ATIME
                $ACTION
            fi
        done
    fi
}
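A hypothetical usage example, rerunning a script each time it changes (both file names here are placeholders):

react myscript.sh bash myscript.sh

react watches myscript.sh and runs bash myscript.sh on every change.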
Quick solution for fish shell users who want to track a single file:
while true
    set old_hash $hash
    set hash (md5sum file_to_watch)
    if [ "$hash" != "$old_hash" ]
        command_to_execute
    end
    sleep 1
end
Replace md5sum with md5 on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do this for you.
Here is a common example:
inotifywait -m /path -e create -e moved_to -e close_write |  # -m is --monitor, -e is --event
while read -r path action file; do
    if [[ "$file" =~ .*rst$ ]]; then  # if the suffix is '.rst'
        echo "${path}${file}: ${action}"  # execute your command
        echo 'make html'
        make html
    fi
done
Suppose you want to run rake test every time you modify any Ruby file ("*.rb") in the app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0
while true; do
    t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1)
    if [ "$t_ref" != "$t_curr" ]; then
        t_ref=$t_curr
        rake test
    fi
    sleep 1
done
Benefits
You can run any command or script when the file changes.
It works between any filesystem and virtual machines (shared folders on VirtualBox using Vagrant), so you can use a text editor on your MacBook and run the tests on Ubuntu (VirtualBox), for example.
Warning
The -printf option works well on Ubuntu, but does not work on macOS.
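On macOS, a rough equivalent replaces -printf with BSD stat (a sketch, assuming the same app/ and test/ layout):

t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m %N" {} + | sort -rn | head -n1)

%m is the modification time in seconds since the epoch and %N the file name, so sorting numerically in reverse puts the most recently modified file first.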

Bash script for inotifywait - How to write deletes to log file, and cp close_writes?

I have this bash script:
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
# adding --format %f does not work for some reason
while read dir ev file; do
    cp ./"$file" zinot/"$file"
done
Now, how would I have it do the same thing but also handle deletes by writing the filenames to a log file?
Something like?
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
# adding --format %f does not work for some reason
while read dir ev file; do
    # if DELETE, append $file to /inotify.log
    # else
    cp ./"$file" zinot/"$file"
done
EDIT:
By looking at the messages generated, I found that inotifywait generates CLOSE_WRITE,CLOSE whenever a file is closed. So that is what I'm now checking in my code.
I also tried checking for DELETE, but for some reason that section of the code is not working. Check it out:
#!/bin/bash
fromdir=/path/to/directory/
inotifywait -m -e close_write,delete --exclude '\*.sw??$' "$fromdir" |
while read dir ev file; do
    if [ "$ev" == 'CLOSE_WRITE,CLOSE' ]
    then
        # copy entire file to /root/zinot/ - WORKS!
        cp "$fromdir""$file" /root/zinot/"$file"
    elif [ "$ev" == 'DELETE' ]
    then
        # trying this without echo does not work, but with echo it does!
        echo "$file" >> /root/zinot.txt
    else
        # never saw this error message pop up, which makes sense.
        echo Could not perform action on "$ev"
    fi
done
In the dir, I do touch zzzhey.txt. File is copied. I do vim zzzhey.txt and file changes are copied. I do rm zzzhey.txt and the filename is added to my log file zinot.txt. Awesome!
You need to add -e delete to your monitor, otherwise DELETE events won't be passed to the loop. Then add a conditional to the loop that handles the events. Something like this should do:
#!/bin/bash
inotifywait -m -e delete -e close_write --exclude '\*.sw??$' . |
while read dir ev file; do
    if [ "$ev" = "DELETE" ]; then
        echo "$file" >> /inotify.log
    else
        cp ./"$file" zinot/"$file"
    fi
done
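As an aside, the --format %f attempt from the question probably failed because a custom format changes the fields the loop reads; with --format the read must match. A sketch using %e (event names) and %f (filename):

inotifywait -m -e delete -e close_write --format '%e %f' --exclude '\*.sw??$' . |
while read -r ev file; do
    if [ "$ev" = "DELETE" ]; then
        echo "$file" >> /inotify.log
    else
        cp ./"$file" zinot/"$file"
    fi
done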

Recursively copying a file into multiple directories, if a directory does not exist in Bash

So I need to copy the file /home/servers/template/craftbukkit.jar into every folder inside /home/servers, e.g. /home/servers/server1, /home/servers/server2, etc.
But I only want to do it if /home/servers/whateverserveritiscurrentlyon/mods does not exist. This is what I came up with, and I was wondering whether it will work:
echo " Script to copy a file to all server directories, only if mods does not exist in that directory"
for i in /home/servers/*/; do
    if [ ! -d "$i/mods" ]; then
        cp -f /home/servers/template/craftbukkit.jar "$i"
    fi
done
echo " completed script ..."
Looks like it should work. To non-destructively test, change the cp -f ... line to say echo cp -f ... and review the output.
It could also be somewhat shortened, but it wouldn't affect efficiency much:
for i in /home/servers/*/
do
    [[ -d "${i}/mods" ]] || cp -f /home/servers/template/craftbukkit.jar "${i}/."
done

Shell Scripts with conditional echoes

How do I suppress the output of what my script is doing and only have the echoes print?
For instance, I am tarring a directory and I don't want every file it tars to be printed, only the echo messages.
#!/bin/bash
clear
echo "Compressing the files"
cd ~/LegendaryXeo/html/
tar --exclude=".git" -cvf site.tar *
mv site.tar ~/LegendaryXeo/work/
cd ~/LegendaryXeo/work/
clear
echo "Extracting the site"
tar -xvf site.tar
echo "Deleting Tar"
cd ~/LegendaryXeo/work/
rm -f site.tar
clear
echo "Copying files to server"
scp -r ~/LegendaryXeo/work/* user@site.com:~/../../domains/
Redirect the output of tar to /dev/null:
tar [your options] [files] &> /dev/null
Any echo command you have in your script will still output to the screen.
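Applied to the script above, it is also enough to drop the -v (verbose) flag, which is what makes tar list every file:

tar --exclude=".git" -cf site.tar *   # no -v: tar prints nothing per file
tar -xf site.tar

scp likewise has a -q flag to silence its progress meter.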
