I wrote a shell script that downloads a document.
cd Desktop/Reports/folder/
wget http://www.mywebpage.com/data/reports/2012_04_02_data.xls
echo "data downloaded."
The tricky part is that the document I want to load (2012_04_02_data.xls) is dated for the previous day, so I need to download the previous day's data, not the current day's.
My goal is to run the shell script without having to increment the calendar day each morning. Instead, the script should take the current date, subtract one calendar day, and substitute that date into the URL.
How about:
wget "http://www.mywebpage.com/data/reports/$(date -d yesterday +%Y_%m_%d)_data.xls"
This ought to work for GNU coreutils, which based on the linux tag I'm guessing you have.
You can use the date command:
$ wget "http://www.mywebpage.com/data/reports/$(date -d yesterday +'%Y_%m_%d')_data.xls"
The trickier part is handling off days. Usually I wget the page that contains the link, grep the date out of the href attribute, and then pass that string to the "real" wget command.
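A sketch of that approach, assuming a hypothetical listing page whose links contain names like 2012_04_02_data.xls (the real page markup may well differ):

```shell
#!/bin/sh
# Pull report filenames out of a page's href attributes and keep the newest;
# names dated YYYY_MM_DD sort correctly with a plain sort.
latest_report() {
    grep -o '[0-9]\{4\}_[0-9]\{2\}_[0-9]\{2\}_data\.xls' | sort | tail -n 1
}

base=http://www.mywebpage.com/data/reports
# Real usage (needs network access to the listing page):
#   latest=$(wget -qO- "$base/" | latest_report)
#   wget "$base/$latest"
```

The function just filters its stdin, so it works the same whether the page comes from wget, curl, or a saved file.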
You tagged this with perl, so here's how I'd do it with Perl. That doesn't mean I think you should use Perl though:
use v5.10.1;
use DateTime;
use Mojo::UserAgent;
chdir 'Desktop/Reports/folder' or die "Could not chdir: $!";
my $date = DateTime->now->subtract( days => 1 )->ymd( '_' );
my $url = sprintf
"http://www.mywebpage.com/data/reports/%s_data.xls",
$date;
my $data = Mojo::UserAgent->new->get( $url )->res->body;
You can do more fancy stuff with the user agent to save chunks to a file as it comes in and many other things you might want to do, but you'll have to fill in those bits yourself.
I have a script that I scheduled for weekdays with cron. The script uses wget to download a file every business day. The URL is dynamic based on the date. I created the following script:
day_or_weekend=`date +%w`
if [$day_or_weekend == 1] ; then
look_back=3
else
look_back=1
fi
wget -O file_name_`date +%Y%m%d`.csv https://website/filename/date/`date $look_back days ago + %Y-%m-%d`/type/csv
This script generates the file with the correct name, but the contents are empty. I'm quite new to writing bash shell scripts, so I'm not sure how to go about debugging this. So my questions are:
Am I defining the look_back variable correctly?
Am I correctly adding the date parameter to the wget url?
This should solve your blank-file issue. Shell command lines are very particular about spacing and quoting.
wget -O "file_name_$(date +%Y%m%d).csv" \
"https://website/filename/date/$(date -d "$look_back days ago" +%Y-%m-%d)/type/csv"
Your logic for determining the previous working day will function as designed as long as you use busybox or GNU date (BSD date must instead subtract seconds from an epoch representation, and plain POSIX date can't do this at all). Note also that the test needs spaces inside the brackets: if [ "$day_or_weekend" -eq 1 ]; then
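Putting it together, here is a sketch of the whole flow with the weekday logic factored into a function so it can be tested against fixed dates (GNU date assumed; the URL shape and "website" host are copied from the question, and the download is replaced by an echo so nothing touches the network):

```shell
#!/bin/sh
# Print the business day preceding a given YYYY-MM-DD date (GNU date assumed).
# %w yields 0=Sunday .. 6=Saturday; on a Monday we reach back to Friday.
prev_business_day() {
    dow=$(date -d "$1" +%w)
    if [ "$dow" -eq 1 ]; then
        look_back=3     # Monday: previous business day is Friday
    else
        look_back=1
    fi
    date -d "$1 -$look_back days" +%Y-%m-%d
}

today=$(date +%Y-%m-%d)
prev=$(prev_business_day "$today")

# "website" is the placeholder host from the question; echo instead of wget here
echo wget -O "file_name_$(date -d "$today" +%Y%m%d).csv" \
    "https://website/filename/date/$prev/type/csv"
```

Because the function takes the reference date as a parameter, you can sanity-check the Monday case without waiting for a Monday.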
I need to find today's date and then subtract a year and format that date into the YYYY-MM-dd format.
I am able to accomplish this with a script I wrote, but apparently it is only compatible with bash. I need it to be compatible with AIX.
lastYear=`date +'%Y-%m-%d' -d 'last year'`
searchDate="$lastYear 00.00.00";
echo "Retrieving data start from $searchDate"
myquery="myquery >= '$searchDate'"
When run on an AIX machine, only the "00.00.00" part of $searchDate gets passed; the date is not prefixed before the time as I hoped. What is the safest way to write this for the widest compatibility across Linux/Unix variants?
Thank you!
Why make it so complicated?
#!/bin/ksh
typeset -i year=$( date +%Y )
(( year -= 1 ))
typeset rest=$( date +%m-%d )
echo "${year}-${rest}"
This should work in any shell. If you use plain sh, replace the
$( ... )
with backticks
` ... `
But for bash and ksh I use $( ... ) -- just personal preference.
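One nitpick with the approach above: the two date calls run a moment apart, so right around midnight on New Year's Eve they could disagree. A variant (plain POSIX sh, so it should also hold on AIX) that reads the clock once and splits the result with parameter expansion:

```shell
#!/bin/sh
# Read the clock once, then split year from month-day; the ':' is just a
# separator chosen so it can't collide with the '-' inside the month-day part.
now=$(date +%Y:%m-%d)          # e.g. 2023:06-21
year=${now%%:*}
rest=${now#*:}
echo "$((year - 1))-$rest"     # e.g. 2022-06-21
```

This keeps everything POSIX: one external date call, arithmetic via $(( )), and no GNU-only -d option.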
Check Out the Updated Section Below
Original Answer
Try this. It uses BSD date's -v flag to adjust the current date by negative one year (-v-1y):
searchDate=$(date -v-1y +'%Y-%m-%d')
echo "Retrieving data start from $searchDate"
myquery="myquery >= '$searchDate'"
Here is the output:
Retrieving data start from 2017-06-21
Note: I did try to run the lines you provided above in a script file with a bash shebang, but ran into an illegal time format error and the output was 00:00:00. The script I provided above runs cleanly on my unix system.
Hope this helps. Good luck!
Updated Section
Visit ShellCheck.net
It's a nice resource for testing code compatibility between the sh, bash, dash, and ksh (AIX's default) shells.
Identifying the Actual Issue
When I entered your code, it clearly identified syntax that is considered legacy and suggested how to fix it, and gave this link with an example and a great explanation. Here's the output:
lastYear=`date +'%Y-%m-%d' -d 'last year'`
^__Use $(..) instead of legacy `..`.
So, it looks like you might be able to use the code you wrote, by making one small change:
lastYear=$(date +'%Y-%m-%d' -d 'last year')
Note: This question already has an accepted answer, but I wanted to share this resource, because it may help others trouble-shoot shell compatibility issues like this in the future.
I am trying to get the date and time a file was last modified. I used a variable for the date and a variable for the time.
These will get the current date and time, but I want to use the date command's -r option so they reflect when the file was last modified. I'm just not sure how to use it in my variables.
How would I go about doing this?
Here are my variables:
DATE="$(date +'%m/%d/%Y')"
TIME="$(date +'%H:%M')"
I tried putting the -r after and before the time and date.
As people might tell you, you should not parse the output of ls: it can easily break if your file name contains tabs, spaces, or line breaks, if your user decides to specify a different set of ls options, or if the ls version you find doesn't behave as you expect...
Use stat instead:
stat -c '%Y' "$myfile"
will give you the seconds since epoch.
Try
date -d "@$(stat -c '%Y' "$myfile")" '+%m/%d/%Y'
to get the date, and read through man date to pick the format you want, adjusting the format string in the command line above; for the time:
date -d "@$(stat -c '%Y' "$myfile")" '+%H:%M'
EDIT: used your formats.
EDIT2: I really don't think your date format is wise, because it's just so ambiguous for anyone not from the US, and also it's not easily sortable. But it's a cultural thing, so this is more of a hint: If you want to make your usable for people from abroad, either use Year-month-day as format, or get the current locale's setting to format dates.
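For example (GNU stat/date assumed; %F is the sortable ISO 8601 form and %x is the current locale's date representation; the helper name and the temp file are just for illustration):

```shell
#!/bin/sh
# Print a file's mtime in a sortable format and in the locale's own format
show_mtime() {
    mtime=$(stat -c %Y "$1")       # seconds since the epoch
    date -d "@$mtime" +%F          # ISO 8601: YYYY-MM-DD, sorts naturally
    date -d "@$mtime" +%x          # whatever the current locale uses
}

tmp=$(mktemp)
touch -d '2021-04-10 23:22' "$tmp" # set a known mtime for the demo
show_mtime "$tmp"                  # first line: 2021-04-10
rm -f "$tmp"
```

%F avoids the US/European ambiguity entirely, while %x defers the question to whatever locale the user runs under.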
I think you are looking for
ls -lt myfile.txt
Here, columns 6 through 8 show when the file was modified.
Or you could use stat myfile.txt to check the modified time of a file.
I know this is a very old question, but, for the sake of completeness, I'm including an additional answer here.
The original question does not specify the operating system. stat differs significantly between SysV-inspired Unixes (e.g. Linux) and BSD-inspired ones (e.g. Free/Open/NetBSD, macOS/Darwin).
Under macOS Big Sur (11.5), you can get the date of a file with a single stat command:
stat -t '%m/%d/%Y %H:%M' -f "%Sm" myfile.txt
will output
04/10/2021 23:22
for April 10, 2021.
You can easily put that in two commands, one for the date, another for the time, of course, to comply with the original question as formulated.
Use GNU stat.
mtime=$(stat --format=%y filename)
Previously I was using uuidgen to create unique filenames that I then need to iterate over by date/time via a bash script. I've since found that simply looping over said files via 'ls -l' will not suffice, because evidently I can only trust the OS to keep timestamp resolution in seconds (nanoseconds are all zero when viewing files via stat on this particular filesystem and kernel).
So I then thought maybe I could just use something like date +%s%N for my filename. This will print the seconds since 1970 followed by the current nanoseconds.
I'm possibly over-engineering this at this point, but these are files generated on high-usage enterprise systems so I don't really want to simply trust the nanosecond timestamp on the (admittedly very small) chance two files are generated in the same nanosecond and we get a collision.
I believe the uuidgen script has logic baked in to handle this occurrence so it's still guaranteed to be unique in that case (correct me if I'm wrong there... I read that someplace I think but the googles are failing me right now).
So... I'm considering something like
FILENAME=`date +%s`-`uuidgen -t`
echo $FILENAME
to ensure I create a unique filename that can then be iterated over with a simple 'ls' and whose name can be trusted to be both unique and sequential by time.
Any better ideas or flaws with this direction?
If you order your date format by year, month (zero padded), day (zero padded), hour (zero padded), minute (zero padded), then you can sort by time easily:
FILENAME=`date '+%Y-%m-%d-%H-%M'`-`uuidgen -t`
echo $FILENAME
or
FILENAME=`date '+%Y-%m-%d-%H-%M'`-`uuidgen -t | head -c 5`
echo $FILENAME
Which would give you:
2015-02-23-08-37-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
or
2015-02-23-08-37-xxxxx
# the same as above, but shorter unique string
You can choose other delimiters for the date/time besides - as you wish, as long as they're within the valid characters for a Linux file name.
You will need %N for precision (nanoseconds):
filename=$(date +%s.%N)_$(uuidgen -t); echo $filename
1424699882.086602550_fb575f02-bb63-11e4-ac75-8ca982a9f0aa
BTW if you use %N and you're not using multiple threads, it should be unique enough.
You could take what TIAGO said about %N precision and combine it with taskset.
You can find some info here: http://manpages.ubuntu.com/manpages/hardy/man1/taskset.1.html
and then run your script
taskset --cpu-list 1 my_script
I've never tested this, but it should run your script only on CPU core 1. I'm thinking that if your script runs on a single core, combined with date %N (nanoseconds) + uuidgen, there's no way you can get duplicate filenames.
I have a bash script which backs up my source code every 10 minutes via crontab. The script worked until the end of August but has not worked since September 1st. This is the script:
#!/bin/sh
date=`date +%e-%m-%y`
cd /home/neky/python
tar -zcf lex.tar.gz lex/
echo $date
mv lex.tar.gz lex-$date.tar.gz
mv lex-$date.tar.gz /home/neky/Dropbox/lex/lex-$date.tar.gz
If I execute it manually, it prints the current date, 4-09-12, and this error: mv: target '4-09-12.tar.gz' is not a directory
What could be the problem?
Your date contains a space when the day of month is a single digit (which also explains why it only stopped working in the new month). That results in your command being split up, i.e.
# this is what you end up with
mv lex.tar.gz lex- 4-09-12.tar.gz
Use date +%d-%m-%y instead which will give you 04-09-12 (note %d instead of %e).
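You can see the difference directly (GNU date; the brackets are only there to make the padding visible, and the leading space in the %e output is exactly what split the mv arguments):

```shell
#!/bin/sh
# %e blank-pads a single-digit day of month; %d zero-pads it
date -d 2012-09-04 '+[%e-%m-%y]'   # -> [ 4-09-12]  (note the space)
date -d 2012-09-04 '+[%d-%m-%y]'   # -> [04-09-12]
```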
If you really want a space in the name, you'll need to quote your variables, i.e.:
mv lex.tar.gz "lex-$date.tar.gz"
mv "lex-$date.tar.gz" /home/neky/Dropbox/lex/
The character % is special in crontab lines: cron treats an unescaped % as the end of the command and turns the rest into newlines, so if your date format ever appears directly in the crontab entry, escape it as \%. (Inside the script file itself, % needs no escaping.)
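A sketch of what that would look like if the tar command lived directly in the crontab rather than in a script (hypothetical entry; the paths are made up):

```shell
# A crontab entry must escape each % because cron turns an unescaped %
# into a newline and feeds the remainder to the command's stdin:
#
#   */10 * * * * tar -zcf "/home/neky/Dropbox/lex/lex-$(date +\%d-\%m-\%y).tar.gz" -C /home/neky/python lex
#
# In a script file invoked by cron (as in the question), plain % is fine.
```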