bash is zipping entire home

I am trying to back up all world* folders from /home/mc/server/ and drop the zipped archive into /home/mc/backup/:
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
backup="/home/mc/backup/map$moment.zip"
map="/home/mc/server/world*"
zipping="zip -r -9 $backup $map"
eval $zipping
The zipped file is created in the backup folder as expected, but when I unzip it, it contains the entire /home dir. I am running this script in two ways:
Manually
Using the user's crontab
Finally, if I echo $zipping, it prints exactly the command I need to run. What am I missing? Thank you in advance.

There's no reason to use eval here (and no, justifying it on DRY grounds because you want to both log a command line and subsequently execute it does not count as a good reason, IMO).
Define a function and call it with the appropriate arguments:
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
zipping () {
    output=$1
    shift
    zip -r -9 "$output" "$@"
}
zipping "/home/mc/backup/map$moment.zip" /home/mc/server/world*
(I'll admit, I don't know what is causing the behavior you report, but it would be better to confirm it is not somehow specific to the use of eval before trying to diagnose it further.)

Related

Create file, but fail if it exists, with bash [duplicate]

In the open() system call, if I open with O_CREAT | O_EXCL, the system call ensures that the file will only be created if it does not exist. The atomicity is guaranteed by the system call. Is there a similar way to create a file in an atomic fashion from a bash script?
UPDATE:
I found two different atomic ways:
Use set -o noclobber. Then you can use the > operator atomically.
Just use mkdir. mkdir is atomic.
A 100% pure bash solution:
set -o noclobber
{ > file ; } &> /dev/null
This command creates a file named file if no file with that name exists. If the file already exists, it does nothing (but returns a non-zero exit code).
Pros of > over the touch command:
Doesn't update timestamp if file already existed
100% bash builtin
Return code as expected: fail if file already existed or if file couldn't be created; success if file didn't exist and was created.
Cons:
You need to set the noclobber option (which is fine in a script if you're careful with redirections, or you can unset it afterwards).
I guess this solution is really the bash counterpart of the open system call with O_CREAT | O_EXCL.
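For instance, to keep noclobber from leaking into the rest of a script, the creation attempt can be confined to a subshell; a minimal sketch (the lock file path is illustrative):
#!/bin/bash
lockfile=/tmp/myscript.lock   # hypothetical path

# noclobber is enabled only inside the subshell used for the atomic create.
if ( set -o noclobber; > "$lockfile" ) 2>/dev/null; then
    trap 'rm -f "$lockfile"' EXIT   # clean up on exit
    echo "acquired $lockfile"
    # ... critical section ...
else
    echo "$lockfile already exists, another instance is running" >&2
    exit 1
fi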
Here's a bash function using the mv -n trick:
function mkatomic() {
    f="$(mktemp)"
    mv -n "$f" "$1"
    if [ -e "$f" ]; then
        rm "$f"
        echo "ERROR: file exists:" "$1" >&2
        return 1
    fi
}
Examples:
$ mkatomic foo
$ wc -c foo
0 foo
$ mkatomic foo
ERROR: file exists: foo
You could create it under a randomly-generated name, then rename it into place under the desired name (mv -n random desired). The rename will fail if the file already exists.
Like this:
#!/bin/bash
touch randomFileName
mv -n randomFileName lockFile
if [ -e randomFileName ] ; then
    echo "Failed to acquire lock"
else
    echo "Acquired lock"
fi
Just to be clear, ensuring the file will only be created if it doesn't exist is not the same thing as atomicity. The operation is atomic if and only if, when two or more separate threads attempt to do the same thing at the same time, exactly one will succeed and all others will fail.
The best way I know of to create a file atomically in a shell script follows this pattern (and it's not perfect):
create a file that has an extremely high chance of not existing (using a decent random number selection or something in the file name), and place some unique content in it (something that no other thread would have - again, a random number or something)
verify that the file exists and contains the contents you expect it to
create a hard link from that file to the desired file
verify that the desired file contains the expected contents
In particular, touch is not atomic, since it will create the file if it's not there, or simply update the timestamp. You might be able to play games with different timestamps, but reading and parsing a timestamp to see if you "won" the race is harder than the above. mkdir can be atomic, but you would have to check the return code, because otherwise, you can only tell that "yes, the directory was created, but I don't know which thread won". If you're on a file system that doesn't support hard links, you might have to settle for a less ideal solution.
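A minimal sketch of the hard-link pattern described above (paths are illustrative; ln fails if the target already exists, and over NFS you would additionally compare the file contents as the steps above suggest):
lockfile=/tmp/mylock
tmpfile=$(mktemp /tmp/mylock.XXXXXX)   # name with a random suffix
echo "$$" > "$tmpfile"                 # unique content: this process's PID

if ln "$tmpfile" "$lockfile" 2>/dev/null; then
    echo "acquired $lockfile"
else
    echo "lost the race: $lockfile already exists" >&2
fi
rm -f "$tmpfile"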
Another way to do this is to use umask to try to create the file and open it for writing, without creating it with write permissions, like this:
LOCK_FILE=only_one_at_a_time_please
UMASK=$(umask)
umask 777
echo "$$" > "$LOCK_FILE"
umask "$UMASK"
trap "rm '$LOCK_FILE'" EXIT
If the file is missing, the script will succeed at creating and opening it for writing, despite the file being created without writing permissions. If it already exists, the script won't be able to open the file for writing. It would be possible to use exec to open the file and keep the file descriptor around.
rm requires you to have write permission on the directory itself, regardless of the file's permissions.
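A sketch of the exec variant mentioned above (the fd number and the explicit exit on failure are additions of mine):
LOCK_FILE=only_one_at_a_time_please
UMASK=$(umask)
umask 777
exec 9> "$LOCK_FILE" || exit 1    # fails if the unwritable lock file already exists
umask "$UMASK"
trap 'rm -f "$LOCK_FILE"' EXIT
echo "$$" >&9                     # fd 9 stays open for the rest of the script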
touch is the command you are looking for. It updates the timestamps of the given file if it exists, or creates it if it doesn't.

Change the output directory on command line For Loop

I have been working with csvkit to build out a solution for parsing fixed-width data files to CSV. I have put together the code below to iterate through all the files in a directory, but it also places the output files back into the same directory they came from. As best practice I believe in using an 'IN' folder and an 'OUT' folder. I am also running this on the command line on a Mac.
for x in $(ls desktop/fixedwidth/*.txt); do x1=${x%%.*}; in2csv -f fixed -s desktop/ff/ala_schema.csv $x > $x1.csv; echo "$x1.csv done."; done
I feel that I am missing something, or that I need to change something in the snippet shown below, but I just can't put my finger on it.
x1=${x%%.*}
Any help on this would be wonderful, and I thank you in advance.
When you run
in2csv -f fixed -s desktop/ff/ala_schema.csv $x > $x1.csv
the output of your program is written to the file ${x1}.csv.
So just build the output path correctly (note that ${x%%.*} still contains the input directory prefix, so strip that off as well):
output_dir=/path/to/your/dir/
name=${x##*/}                      # strip the directory part of the input path
output_file=${output_dir}${name%%.*}.csv
in2csv -f fixed -s desktop/ff/ala_schema.csv "$x" > "$output_file"
But you should create output_dir before running this code; otherwise you may get an error that the directory doesn't exist.
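Putting it together, a minimal sketch of the whole loop with separate IN and OUT folders (the output directory name is illustrative; the glob also replaces the $(ls ...) construct, which breaks on file names containing spaces):
in_dir=desktop/fixedwidth
out_dir=desktop/fixedwidth_out   # hypothetical OUT folder
mkdir -p "$out_dir"

for x in "$in_dir"/*.txt; do
    name=${x##*/}                     # e.g. data1.txt
    out="$out_dir/${name%.*}.csv"     # e.g. desktop/fixedwidth_out/data1.csv
    in2csv -f fixed -s desktop/ff/ala_schema.csv "$x" > "$out"
    echo "$out done."
done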

bash downloading files ftp url

I'm working on a function that parses a file containing a list of file names to download. I'm using curl to download them, but is there a better way? The output is shown, which is okay, but is there a way to hide it? Is there a way to handle the case where a file isn't found and move on to the next download if something goes wrong? You might want to ignore what I do to build the proper link name; it was a pain. The directory layout follows a pattern based on the file name.
#!/bin/bash
# reads from the file and assigns to $MYARRAY and download to Downloads/*
FILENAME=$1
DLPATH=$2
VARIABLEDNA="DNA"
index=0
function Download {
    VARL=$1
    #VARL=$i
    echo $VARL
    VAR=${VARL,,}
    echo $VAR
    VAR2=${VAR:1:2}
    echo $VAR2
    HOST=ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/pdb/
    HOSTCAT=$HOST$VAR2
    FILECATB='/pdb'
    FILECATE='.ent.gz'
    NOSLASH='pdb'
    DLADDR=$HOSTCAT$FILECATB$VAR$FILECATE
    FILECATNAME=$NOSLASH$VAR$FILECATE
    echo $DLADDR
    curl -o Downloads/$FILECATNAME $DLADDR
    gunzip Downloads/$FILECATNAME
}
mkdir -p Downloads
while read line ; do
    MYARRAY[$index]="$line"
    index=$(($index+1))
done < $FILENAME
echo "MYARRAY is: ${MYARRAY[*]}"
echo "Total pdbs in the file: ${index}"
for i in "${MYARRAY[@]}"
do
    Download $i
done
I'm trying to write the log file to a folder that I made before downloading, but the log doesn't seem to end up in that folder. It gets written to the directory the script runs from, and it isn't written correctly either. My syntax might be wrong?
curl -o Downloads/$FILECATNAME $DLADDR >> Downloads\LOGS\$LOGFILE 2>&1
Okay, first of all, I'm not sure if I got it all right, but I'll give it a try:
I'm using curl to download them but is there a better way?
I don't know a better one. You could use wget instead of curl, but curl is much more powerful.
The output is shown which is okay but is there way for the output not be shown?
You could use nohup (e.g. nohup curl -o Downloads/$FILECATNAME $DLADDR). If you don't redirect the output to a specific file, it will be stored in nohup.out. By adding an ampersand (&) at the end of your command you can also let it run in the background, so the command keeps executing even if you lose the connection to the server.
Is there way to handle exceptions if the file isn't found and move on to the next file to be download if something happens?
You could use something like test ... exists, or you could just check your nohup.out for errors or anything else with a grep.
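For instance, here is a sketch of a quieter curl call that skips to the next file on failure, using the variable names from the script above (curl's -s flag silences the progress meter, and curl already exits non-zero when the FTP file can't be retrieved):
# Inside the Download function: silence curl and skip gunzip if the fetch fails.
if curl -s -o "Downloads/$FILECATNAME" "$DLADDR"; then
    gunzip "Downloads/$FILECATNAME"
else
    echo "download failed for $DLADDR, moving on" >&2
    return 1
fi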
I hope this helped you in any way.
cheers

Bash command route malfunction

Given this (among other code...):
compile_coffee() {
    echo "Compile COFFEESCRIPT files..."
    i=0
    for folder in ${COFFEE_FOLDER[*]}
    do
        for file in $folder/*.coffee
        do
            file_name=$(echo "$file" | awk -F "/" '{print $NF}' | awk -F "." '{print $1}')
            file_destination_path=${COFFEE_DESTINATION_FOLDER[${i}]}
            file_destination="$file_destination_path/$file_name.js"
            if [ -f $file_path ]; then
                echo "+ $file -> $file_destination"
                $COFFEE_CMD $COFFEE_PARAMS $file > $file_destination #FAIL
                #$COFFEE_CMD $COFFEE_PARAMS $file > testfile
            fi
        done
        i=$i+1
    done
    echo "done!"
    compress_javascript
}
And just to clarify: everything except the #FAIL line works flawlessly. If I'm doing something wrong, just tell me. The problem I have is:
The line executes and does what it has to do, but doesn't write the file I put in "file_destination".
If I delete a folder in that path (it's relative to this script, see below), bash throws an error saying the folder doesn't exist.
If I make the folder again, no errors, but no file either.
If I change $file_destination to "testfile", it creates the file with the correct contents.
The $file_destination path is OK (as you can see, my script echoes it).
If I echo the entire line, copy the exact command with its params, and execute it in a shell in the same directory as the script, it works.
I don't know what is wrong with this; I've been wondering for two hours...
Script output (real paths):
(alpha)[pyron#vps herobrine]$ ./deploy.sh compile && ls -l database/static/js/
===============================
=== Compile ===
Compile COFFEESCRIPT files...
+ ./database/static/coffee/test.coffee -> ./database/static/js/test.js
done!
Linking static files to django staticfiles folder... done!
total 0
Complete command:
coffee --compile --print ./database/static/coffee/test.coffee > ./database/static/js/test.js
What am I missing?
EDIT: I've made some progress on this.
In the shell, if I deactivate the Python virtualenv, the script works; but if I call deactivate from the script, it says command not found.
Assuming destination files have no special characters such as spaces in their names, the directories exist, etc., I'd try adding 2>&1, e.g.
$COFFEE_CMD $COFFEE_PARAMS $file > testfile 2>&1
Compilers may put the desired output and/or compilation messages on stderr instead of stdout. You may also want to use the full path to coffee, e.g. /usr/bin/coffee, instead of just the compiler name.
It turned out that the problem wasn't the bash script itself. A few lines later the deploy script runs Django's collectstatic. I noticed that the files were there up to that line, and then I read that collectstatic has a cache system, a very weird one IMO, since I had to delete all the static files and start from scratch to get the script working.
So the problem wasn't the bash script but Django's cache system. I'm not awarding the reputation to myself anyway.
The full deploy script is here: https://github.com/pyronhell/deploy-script-boilerplate and everyone is welcome to improve it.
Cheers.

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
    # Your stuff goes here
    exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution based on how you edit it?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
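A sketch of what such a fastbash wrapper might look like (the path and behavior are illustrative; the command-line length caveat from the question still applies, since the whole script text is passed to bash -c):
#!/bin/bash
# Hypothetical /usr/local/fastbash: read the entire script into memory first,
# then run it, so later edits to the file don't affect this invocation.
script=$1
shift
exec bash -c "$(cat "$script")" "$script" "$@"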
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self-contained way to make a script resistant to this problem is to have the script copy and re-execute itself, like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
    rm -f /tmp/copy-$$
    cp $0 /tmp/copy-$$
    exec /tmp/copy-$$ "$@"
    echo "error copying and execing script"
    exit 1
fi
rm $0
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)

Resources