Change the output directory in a command-line for loop

I have been working with csvkit to build a solution for parsing fixed-width data files into CSV. I have put together the code below to iterate through all the files in a directory, but it also places the output files back into the same directory they came from. As a best practice I believe in using an 'IN' folder and an 'OUT' folder. I am running this on the command line on a Mac.
for x in $(ls desktop/fixedwidth/*.txt); do x1=${x%%.*}; in2csv -f fixed -s desktop/ff/ala_schema.csv $x > $x1.csv; echo "$x1.csv done."; done
I feel that I am missing something, or that I need to change something in the snippet shown below, but I just can't put my finger on it.
x1=${x%%.*}
Any help on this would be wonderful and I thank you in advance.

When you run,
in2csv -f fixed -s desktop/ff/ala_schema.csv $x > $x1.csv
the output of your program is written to the file ${x1}.csv.
So just set the correct path for the output file:
output_dir=/path/to/your/dir/
base=${x##*/}                              # drop the input directory from the file name
output_file=${output_dir}${base%%.*}.csv
in2csv -f fixed -s desktop/ff/ala_schema.csv "$x" > "$output_file"
But you should create your output_dir before running this code; otherwise you may get an error saying the directory doesn't exist.
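Putting it together, a minimal sketch of the whole loop with separate IN and OUT folders; the desktop/out path is an assumption, so substitute whatever output folder you actually use:
in_dir=desktop/fixedwidth          # assumed 'IN' folder
out_dir=desktop/out                # assumed 'OUT' folder
mkdir -p "$out_dir"
for x in "$in_dir"/*.txt; do
    base=${x##*/}                  # file name without the directory part
    in2csv -f fixed -s desktop/ff/ala_schema.csv "$x" > "$out_dir/${base%%.*}.csv"
    echo "$out_dir/${base%%.*}.csv done."
done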

Related

bash is zipping entire home

I am trying to back up all world* folders from /home/mc/server/ and drop the zipped file in /home/mc/backup/
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
backup="/home/mc/backup/map$moment.zip"
map="/home/mc/server/world*"
zipping="zip -r -9 $backup $map"
eval $zipping
The zipped file is created in the backup folder as expected, but when I unzip it, it contains the entire /home dir. I am running this script in two ways:
Manually
Using user's crontab
Finally, if I echo $zipping, it prints exactly the command I need to run. What am I missing? Thank you in advance.
There's no reason to use eval here (and no, justifying it on DRY grounds because you want to both log a command line and subsequently execute it does not count as a good reason, IMO).
Define a function and call it with the appropriate arguments:
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
zipping () {
    output=$1
    shift
    zip -r -9 "$output" "$@"
}
zipping "/home/mc/backup/map$moment.zip" /home/mc/server/world*
(I'll admit, I don't know what is causing the behavior you report, but it would be better to confirm it is not somehow specific to the use of eval before trying to diagnose it further.)
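If the string-plus-eval approach was only there so the command could be logged before it runs, the same effect can be had inside the function without eval; a sketch, assuming bash, where printf '%q' prints each argument shell-quoted:
zipping () {
    output=$1
    shift
    printf 'running: zip -r -9 %q' "$output"    # log the exact command line
    printf ' %q' "$@"
    printf '\n'
    zip -r -9 "$output" "$@"
}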

Moving File to Desktop with Bash Script

I am trying to write a bash script that writes content to a file and then moves the file to my desktop. The first chunk of code, before the line break, works well and outputs the necessary content to a file called main_output.txt.
Moving this file is where I run into trouble. I know relative paths are best, but I'm using OSX El Capitan and relative paths in bash are not working as expected. Any ideas would be greatly appreciated!
#/bin/bash
OUTPUT="$(ls -1t ~/Desktop/directory/inputs|head -n 1)"
echo $OUTPUT > ~/Desktop/directory/outputs/main_output.txt
FINALFILE = ~/Desktop/directory/outputs/main_output.txt
DESTINATION = ~/Desktop/
mv $FINALFILE $DESTINATION
There must be no spaces around the = sign.
Your OUTPUT assignment is done right, which is why it works. Change the second part to this:
FINALFILE=~/Desktop/directory/outputs/main_output.txt
DESTINATION=~/Desktop/
You may also need to put double quotes around the paths.
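For reference, a minimal corrected version of the whole script with the assignments fixed and the variables quoted (the directory layout is the one from the question):
#!/bin/bash
OUTPUT="$(ls -1t ~/Desktop/directory/inputs | head -n 1)"
echo "$OUTPUT" > ~/Desktop/directory/outputs/main_output.txt
FINALFILE=~/Desktop/directory/outputs/main_output.txt
DESTINATION=~/Desktop/
mv "$FINALFILE" "$DESTINATION"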

bash downloading files ftp url

I'm working on a function that parses a file containing a list of file names to download. I'm using curl to download them, but is there a better way? The output is shown, which is okay, but is there a way to hide it? Is there a way to handle the case where a file isn't found and move on to the next download? You can ignore what I do to build the link name; it was a pain, but the directory layout follows a pattern based on the file name.
#!/bin/bash
# reads from the file and assigns to $MYARRAY and download to Downloads/*
FILENAME=$1
DLPATH=$2
VARIABLEDNA="DNA"
index=0
function Download {
    VARL=$1
    #VARL=$i
    echo $VARL
    VAR=${VARL,,}
    echo $VAR
    VAR2=${VAR:1:2}
    echo $VAR2
    HOST=ftp://ftp.wwpdb.org/pub/pdb/data/structures/divided/pdb/
    HOSTCAT=$HOST$VAR2
    FILECATB='/pdb'
    FILECATE='.ent.gz'
    NOSLASH='pdb'
    DLADDR=$HOSTCAT$FILECATB$VAR$FILECATE
    FILECATNAME=$NOSLASH$VAR$FILECATE
    echo $DLADDR
    curl -o Downloads/$FILECATNAME $DLADDR
    gunzip Downloads/$FILECATNAME
}
mkdir -p Downloads
while read line ; do
    MYARRAY[$index]="$line"
    index=$(($index+1))
done < $FILENAME
echo "MYARRAY is: ${MYARRAY[*]}"
echo "Total pdbs in the file: ${index}"
for i in "${MYARRAY[@]}"
do
    Download $i
done
I'm trying to write a log file into a folder that I created before downloading, but the log doesn't end up in that folder. It gets written to the directory the script runs from, and it isn't written correctly either. Is my syntax wrong?
curl -o Downloads/$FILECATNAME $DLADDR >> Downloads\LOGS\$LOGFILE 2>&1
Okay, first of all, I'm not sure if I got it all right, but I'll give it a try:
I'm using curl to download them, but is there a better way?
I don't know a better one. You could use wget instead of curl, but curl is much more powerful.
The output is shown, which is okay, but is there a way to hide it?
You could use nohup (e.g. nohup curl -o Downloads/$FILECATNAME $DLADDR). If you don't redirect the output to a specific file, it will be stored in nohup.out. By adding an ampersand (&) at the end of the command you can also let it run in the background, so it keeps executing even if you lose the connection to the server.
Is there a way to handle the case where a file isn't found and move on to the next download?
You could use something like test to check whether the file exists, or you could just grep your nohup.out for errors or anything else of interest.
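Another option is to let curl handle both points itself: --silent hides the progress output, --show-error still prints real errors, and --fail makes curl return a non-zero exit status when the file isn't found, so the loop can skip it and move on. A rough sketch of the download step along those lines (the Downloads/logs path and the log file name are just placeholders):
mkdir -p Downloads/logs
if curl --fail --silent --show-error -o "Downloads/$FILECATNAME" "$DLADDR" \
        >> Downloads/logs/download.log 2>&1; then
    gunzip "Downloads/$FILECATNAME"
else
    echo "skipping $VARL: download failed" >&2
fi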
I hope this helped you in some way.
Cheers.

Bash command route malfunction

Given this (among more...):
compile_coffee() {
    echo "Compile COFFEESCRIPT files..."
    i=0
    for folder in ${COFFEE_FOLDER[*]}
    do
        for file in $folder/*.coffee
        do
            file_name=$(echo "$file" | awk -F "/" '{print $NF}' | awk -F "." '{print $1}')
            file_destination_path=${COFFEE_DESTINATION_FOLDER[${i}]}
            file_destination="$file_destination_path/$file_name.js"
            if [ -f $file_path ]; then
                echo "+ $file -> $file_destination"
                $COFFEE_CMD $COFFEE_PARAMS $file > $file_destination #FAIL
                #$COFFEE_CMD $COFFEE_PARAMS $file > testfile
            fi
        done
        i=$i+1
    done
    echo "done!"
    compress_javascript
}
And just to clarify: everything except the #FAIL line works flawlessly. If I'm doing something wrong, just tell me. The problem I have is:
The line executes and does what it has to do, but doesn't write the file I put in "file_destination".
If I delete a folder on that path (it's relative to this script, see below), bash throws an error saying the folder doesn't exist.
If I create the folder again, there are no errors, but no file either.
If I change $file_destination to "testfile", it creates the file with the correct contents.
The $file_destination path is OK; as you can see, my script echoes it.
If I echo the entire line, copy the exact command with its params and run it in a shell in the same directory as the script, it works.
I don't know what is wrong with this; I've been wondering for two hours...
Script output (real paths):
(alpha)[pyron#vps herobrine]$ ./deploy.sh compile && ls -l database/static/js/
===============================
=== Compile ===
Compile COFFEESCRIPT files...
+ ./database/static/coffee/test.coffee -> ./database/static/js/test.js
done!
Linking static files to django staticfiles folder... done!
total 0
Complete command:
coffee --compile --print ./database/static/coffee/test.coffee > ./database/static/js/test.js
What am I missing?
EDIT: I've made some progress on this.
In the shell, if I deactivate the Python virtualenv the script works, but if I call deactivate from the script it says command not found.
Assuming the destination file names contain no special characters such as spaces, the directories exist, etc., I'd try adding 2>&1, e.g.
$COFFEE_CMD $COFFEE_PARAMS $file > testfile 2>&1
Compilers may put the desired output and/or compilation messages on stderr instead of stdout. You may also want to use the full path to coffee, e.g. /usr/bin/coffee, instead of just the compiler name.
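Along the same lines, checking the compiler's exit status will make a silent failure visible; a small sketch using the same variables as the function above:
"$COFFEE_CMD" $COFFEE_PARAMS "$file" > "$file_destination" 2>&1
status=$?
if [ $status -ne 0 ]; then
    echo "coffee failed for $file (exit code $status)" >&2
fi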
It turned out the problem wasn't the bash script itself. A few lines later the deploy script runs Django's collectstatic command. I noticed that the files were there until that line, and then I read that collectstatic has a cache system. A very weird one IMO, since I had to delete all the static files and start from scratch to get the script working.
So... the problem wasn't the bash script but the Django cache. I'm not awarding the reputation to myself anyway.
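For what it's worth, if stale collected files are the issue, collectstatic can wipe the destination before copying; a one-liner, assuming a standard manage.py at the project root:
python manage.py collectstatic --clear --noinput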
The full deploy script is here: https://github.com/pyronhell/deploy-script-boilerplate and improvements are welcome.
Cheers.

Loop through a directory with Grep (newbie)

I'm trying to loop through the directory the script resides in, which has a bunch of files ending in _list.txt. I would like to grab each file name, assign it to a variable, execute some additional commands, and then move on to the next file until there are no more _list.txt files to process.
I assume I want something like:
while file_name=`grep "*_list.txt" *`
do
Some more code
done
But this doesn't work as expected. Any suggestions on how to accomplish this newbie task?
Thanks in advance.
If I understand your problem correctly, you don't need grep. You can just do:
for file in *_list.txt
do
    # use $file, like echo $file
done
grep is one of the most useful Unix commands, and it's worth understanding well; see some useful examples here. For your current requirement, I think the following code will be useful:
for file in *.*
do
    echo "Happy Programming"
done
In place of *.* you can use other glob patterns. For more useful examples like these, see First Time Linux, or read through all of grep's options at your terminal using man grep.
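Tying the two answers together, a minimal sketch that loops over the _list.txt files and runs a command on each one (the grep pattern "error" is just a placeholder):
for file in *_list.txt
do
    echo "Processing $file"
    grep "error" "$file"        # replace with whatever per-file commands you need
done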
