bash not adding current date to file name

I have a bash script which backs up my source code every 10 minutes through crontab. The script was working until the end of August; it has not worked since September 1st. This is the script:
#!/bin/sh
date=`date +%e-%m-%y`
cd /home/neky/python
tar -zcf lex.tar.gz lex/
echo $date
mv lex.tar.gz lex-$date.tar.gz
mv lex-$date.tar.gz /home/neky/Dropbox/lex/lex-$date.tar.gz
If I execute it manually, it prints the current date 4-09-12, and this error: mv: target ‘4-09-12.tar.gz’ is not a directory
What could be the problem?

Your date contains a space when the day of month is a single digit (which also explains why it only stopped working in the new month). That results in your command being split up, i.e.
# this is what you end up with
mv lex.tar.gz lex- 4-09-12.tar.gz
Use date +%d-%m-%y instead which will give you 04-09-12 (note %d instead of %e).
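You can see the difference directly in a shell (output shown as it would appear on 4 September 2012):
$ date +%e-%m-%y   # %e pads a single-digit day with a space
 4-09-12
$ date +%d-%m-%y   # %d pads with a zero instead
04-09-12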
If you really want a space in the name, you'll need to quote your variables, i.e.:
mv lex.tar.gz "lex-$date.tar.gz"
mv "lex-$date.tar.gz" /home/neky/Dropbox/lex/

The character % (part of your date format) is special in crontab entries: unescaped, cron treats it as a newline and sends the rest of the line to the command's stdin. If the date command appears in the crontab line itself, escape each one as \%.
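For example, if the date command were embedded in the crontab line itself rather than in the script, each % would need a backslash (a sketch using the paths from the question):
*/10 * * * * tar -zcf "/home/neky/Dropbox/lex/lex-$(date +\%d-\%m-\%y).tar.gz" -C /home/neky/python lex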

Related

Unix using formatted date based on variable with wget

I have a script that I scheduled for weekdays with cron. The script uses wget to download a file every business day. The url is dynamic based on the date. I created the following script:
day_or_weekend=`date +%w`
if [$day_or_weekend == 1] ; then
look_back=3
else
look_back=1
fi
wget -O file_name_`date +%Y%m%d`.csv https://website/filename/date/`date $look_back days ago + %Y-%m-%d`/type/csv
This script generates the file with the correct name, but the contents are empty. I'm quite new to writing bash shell scripts, so I'm not sure how to go about debugging this. So my questions are:
Am I defining the look_back variable correctly?
Am I correctly adding the date parameter to the wget url?
This should solve your blank-file issue. Shell command lines are very particular about spacing and quoting.
wget -O "file_name_$(date +%Y%m%d).csv" \
"https://website/filename/date/$(date -d "$look_back days ago" +%Y-%m-%d)/type/csv"
Your logic to determine the previous work day will function as designed so long as you use busybox or GNU date (BSD date must instead subtract seconds from an epoch timestamp, and plain POSIX date cannot do date arithmetic at all).
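Putting it together, a corrected version of the whole script might look like this sketch (the URL shape is taken from the question; note the spaces inside the [ ] test, which the original if line was missing):
#!/bin/bash
# date +%w prints the weekday number (Sunday = 0); on Monday (1)
# look back 3 days to reach Friday, otherwise look back 1 day
day_or_weekend=$(date +%w)
if [ "$day_or_weekend" -eq 1 ]; then
    look_back=3
else
    look_back=1
fi
wget -O "file_name_$(date +%Y%m%d).csv" \
    "https://website/filename/date/$(date -d "$look_back days ago" +%Y-%m-%d)/type/csv"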

Merging CSV files based on filename filter

I'm trying to develop a bash script which filters CSV files (generated every hour) for the day before and merges them into a single CSV file. This script seems to do the job for me, except that I'm trying to filter files based on their filenames.
There would be 24 files for each day in the directory, and I need to filter out these files based on their name format:
foofoo_2017052101502.csv
foofoo_2017052104502.csv
foofoo_2017052104503.csv
foofoo_2017052204501.csv
foofoo_2017052204504.csv
Here, I need to filter for May 21, 2017, so my output CSV file must contain the first three .csv files above.
What should I add in the script for this filter?
The following commands will calculate the previous day as yyyymmdd and use that value in the grep to automatically select all the file names generated on the previous day.
For macOS
dt=`date -j -v-1d +%Y%m%d`
echo $dt
OutputFiles=`ls | grep foofoo_${dt}`
For Linux
dt=`date -d "yesterday" +%Y%m%d`
echo $dt
OutputFiles=`ls | grep foofoo_${dt}`
These commands when added to the script mentioned will filter the file names for the previous day based upon the current time stamp.
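To finish the merge step described in the question, the computed dt can feed a concatenation directly (a sketch; the merged filename is an assumption):
# GNU date shown; on macOS use the -j -v-1d form above
dt=$(date -d "yesterday" +%Y%m%d)
cat foofoo_"${dt}"*.csv > merged_"${dt}".csv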
You can let bash do the filtering for you using globbing. For example, to list only files dated May 21, 2017, you could use:
for filename in foofoo_20170521*.csv; do...
If you want to be able to call your script with an argument specifying the date, for more flexibility, you can use (note that the quotes must not cover the *, or the glob will not expand):
for filename in foofoo_"${1}"*.csv; do...
And then call your script with the date that you want to filter as an argument:
./your_script 20170521
And as @David C. Rankin mentioned in the comments, a very practical way to do it would be to concatenate all the files from the date you want into one CSV that you would then use in your script:
cat foofoo_20170521*.csv > combined_20170521.csv

Bash - File name change Date + 1

I have around 500 files that I need to rename with the date the report represents. The filenames are currently:
WUSR1453722998383.csv
WUSR1453723010659.csv
WUSR1453723023497.csv
And so on. The numbers in the filename have nothing to do with the date, so I cannot use the filename as a guide for what the file should be renamed to. The reports start from 02/12/2014 and there is a report for every day up until yesterday (09/04/2016). Luckily the filenames are also sequential - so 04/12/2014 will have a higher number than 03/12/2014, which will have a higher number than 02/12/2014. This means the files are automatically listed in alphabetical order.
There is however a date in the first line of the CSV before the data:
As at Date,2014-12-02
Now I've checked that I have all the files already, and I do. So what's the best way to rename them to the date? I can either set the starting date as 02/12/2014 and rename each file with a +1 date, or the script can read the date on the first line of the file (As at Date,2014-12-02 for example) and use that date to rename the file.
I have no idea how to write either of the method above in bash, so if you could help out with this, that would be really appreciated.
In terms of file output, I was hoping for:
02-12-2014.csv
03-12-2014.csv
And so on
Is this the answer you need? It assumes all the files are under the current directory. Do some testing before you do the real operation. The precondition is that every date string in your CSV files is unique; otherwise some files will be overwritten.
#!/bin/bash
for f in *.csv
do
    # take the date from the first line ("As at Date,2014-12-02")
    # and rearrange it into dd-mm-yyyy.csv
    o=$(sed '1q' "$f" | awk -F"[,-]" '{print $NF"-"$(NF-1)"-"$(NF-2)".csv"}')
    # should we back up the file first?
    # cp "$f" "$f".bak
    mv "$f" "$o"
done
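The question also mentions the alternative of starting at 02/12/2014 and adding one day per file. A minimal sketch of that approach, relying on the glob sorting the names in date order (as the question says it does) and on GNU date for the arithmetic:
#!/bin/bash
d=2014-12-02
for f in WUSR*.csv
do
    # rename the file to dd-mm-yyyy.csv, then advance d by one day
    mv "$f" "$(date -d "$d" +%d-%m-%Y).csv"
    d=$(date -d "$d + 1 day" +%Y-%m-%d)
done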

S3cmd move file and del folder

I'm trying to write a bash script to automate my backup plan. I use a script which creates an S3 folder each day, with the day as the folder name, and each hour it uploads a backup into that folder.
example: /Application1/20130513/dump.01
My backup plan is to keep 2 days of full backups (one per hour) and one backup per day for the latest 15 days in an S3 folder ("oldbackup").
What is wrong in my script?
#check and clean the S3 bucket
BUCKETNAME='application1';
FOLDERLIST = s3cmd ls s3://$BUCKETNAME
LIMITFOLDER = date --date='1 days ago' +'%Y%m%d'
for f in $FOLDERLIST
do
if [[ ${f} > $LIMITFOLDER && f != "oldbackup" ]]; then
s3cmd sync s3://$BUCKETNAME/$f/dump.rdb.0 s3://$BUCKETNAME/"oldbackup"/dump.rdb.$f
s3cmd del s3://$BUCKETNAME/$f --recursive;
fi
done
OLDBACKUP = s3cmd ls s3://$BUCKETNAME/"oldbackup"
LIMITOLDBACKUP = date --date='14 days ago' +'%Y%m%d'
for dump in $OLDBACKUP
if [${dump} > $LIMITOLDBACKUP]; then
s3cmd del s3://$BUCKETNAME/"oldbackup"/$dump
fi
done
Thanks
First, you are probably going to want to store FOLDERLIST as an array. You can do so like this: FOLDERLIST=($(command)).
Next, you should always store the output of a command you intend to use as a string like so: OUTPUT="$(command)". (Your assignments have spaces around the =, which makes the shell try to run FOLDERLIST itself as a command rather than perform an assignment.)
So for example your first three lines should look like:
BUCKETNAME="application1"
FOLDERLIST=($(s3cmd ls s3://$BUCKETNAME))
LIMITFOLDER="$(date --date="1 days ago" +"%Y%m%d")"
Now your first for-loop should work.
That's the main thing I can see wrong with your script (the second for-loop suffers from the same problems), but you really gave me nothing better to go on.
Your second for-loop (besides not iterating over a proper array) also has no do keyword, so you should write:
for dump in $OLDBACKUP
do
# rest of loop body
done
That could be another issue with your script.
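Putting those fixes together, the second loop might look like the sketch below. The lexicographic comparison and the parsing of the s3cmd ls output are kept exactly as in the question, so treat this as a starting point rather than a finished rotation script:
OLDBACKUP=($(s3cmd ls s3://$BUCKETNAME/oldbackup))
LIMITOLDBACKUP="$(date --date="14 days ago" +"%Y%m%d")"
for dump in "${OLDBACKUP[@]}"
do
    # [[ ]] permits the lexicographic > without escaping, and the
    # spaces around the brackets are required in any case
    if [[ $dump > $LIMITOLDBACKUP ]]; then
        s3cmd del "s3://$BUCKETNAME/oldbackup/$dump"
    fi
done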
Finally, you're only ever using OLDBACKUP and FOLDERLIST to iterate over. The same can be accomplished just by doing:
for f in $(s3cmd ls s3://$BUCKETNAME)
do
# loop body
done
There's no need to store the output in variables unless you plan to reuse it several times.
As a separate matter, there's no need to use variable names consisting entirely of capital letters. Lowercase variable names work too, so long as you take care not to reuse well-known names the shell or other programs depend on (like PATH).

Retrieving File name for bash/shell programming

I need to access two files in my shell script. The only issue is, I am not sure what the file name is going to be, as it is system generated. A part of the file name is always constant, but the rest is system generated and may vary. How do I access these files?
Sample File Names
Type 1
MyFile1.yyyy-mm-dd_xx:yy:zz.log
In this case, I know the MyFile1 portion is constant for all the files; the other portion varies based on date and time. I can use date +%Y-%m-%d to get as far as MyFile1.yyyy-mm-dd_, but I am not sure how to select the correct file. Please note each day will have just one file of this kind. In unix the below command gives me the correct file:
unix> ls MyFile1.yyyy-mm-dd*
Type 2
MyFile2.yyyymmddxxyyxx.RandomText.SomeNumber.txt
In this file, as you can see, the MyFile2 portion is common. I can use date +%Y%m%d to get as far as MyFile2.yyyymmdd (the current date), but again it is not clear how to go on from there. Also, I need to have the previous date in the dd column for File 2. In unix the below command gives me the correct file:
unix> ls MyFile2.yyyymmdd*
Basically I am looking for the following lines in my shell script:
#!/bin/ksh
timeA=$(date +%Y-%m-%d)
timeB=$(date +%Y%m)
sysD=$(date +%d)
sysD=$((sysD-1))
filename1=($Home/folder/MyFile1.$timeA*)
filename2=($Home/folder/MyFile2.$timeB$sysD*)
I'm just not sure how to get the right-hand side for these two files.
The result when running the above script is:
Script.ksh[8]: syntax error at line 8 : `(' unexpected
Perhaps this:
$ file=(MyFile1.yyyy-mm-dd*)
$ echo $file
MyFile1.yyyy-mm-dd_xx:yy:zz.log
It should be noted that you must assign variables in this manner:
foo=123
NOT
foo = 123
Notice the difference carefully. Bad (the $(...) is command substitution, which tries to execute the glob's expansion as a command):
filename1=$($HOME/folder/MyFile1.$timeA*)
Good (the parentheses create an array whose elements are the glob matches):
filename1=($HOME/folder/MyFile1.$timeA*)
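Putting it together for the two files in the question, a minimal sketch (assuming bash or ksh93 for the array syntax, and GNU date for the day-before arithmetic, which also avoids the zero-padding and month-rollover problems of the sysD-1 subtraction):
#!/bin/bash
timeA=$(date +%Y-%m-%d)
timeB=$(date -d yesterday +%Y%m%d)     # yesterday's full date for File 2
filename1=($HOME/folder/MyFile1.$timeA*)
filename2=($HOME/folder/MyFile2.$timeB*)
echo "$filename1" "$filename2"         # each expands to the first glob match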
