I need to access two files in my shell script. The only issue is that I am not sure what the file names are going to be, as they are system generated. Part of each file name is always constant, but the rest is system generated and may vary. How do I access these files?
Sample File Names
Type 1
MyFile1.yyyy-mm-dd_xx:yy:zz.log
In this case, I know the MyFile1 portion is constant for all the files; the other portion varies based on date and time. I can use date +%Y-%m-%d to build up to MyFile1.yyyy-mm-dd_, but I am not sure how to select the correct file. Note that each day will have just one file of this kind. In Unix, the command below gives me the correct file.
unix> ls MyFile1.yyyy-mm-dd*
Type 2
MyFile2.yyyymmddxxyyxx.RandomText.SomeNumber.txt
In this file, as you can see, the MyFile2 portion is common. I can use date +%Y%m%d to build up to MyFile2.yyyymmdd (the current date), but again it is not clear how to go on from there. In Unix, the command below gives me the correct file. Also, for File 2 I need the previous date in the dd part.
unix> ls MyFile2.yyyymmdd*
Basically, I am looking for something like the following in my shell script:
#!/bin/ksh
timeA=$(date +%Y-%m-%d)
timeB=$(date +%Y%m)
sysD=$(date +%d)
sysD=$((sysD-1))
filename1=($Home/folder/MyFile1.$timeA*)
filename2=($Home/folder/MyFile2.$timeB$sysD*)
I am just not sure how to get the right-hand side for these two files.
The result of running the above script is:
Script.ksh[8]: syntax error at line 8 : `(' unexpected
Perhaps this
$ file=(MyFile1.yyyy-mm-dd*)
$ echo $file
MyFile1.yyyy-mm-dd_xx:yy:zz.log
Note that you must assign variables without spaces around the =:
foo=123
NOT
foo = 123
Notice carefully. Bad:
filename1=$($HOME/folder/MyFile1.$timeA*)
Good:
filename1=($HOME/folder/MyFile1.$timeA*)
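If the script must stay in ksh88 (where name=(...) itself is the syntax error reported above), the glob can be expanded into the positional parameters instead. A minimal sketch, using a scratch directory and a made-up file name rather than the asker's real paths:

```shell
#!/bin/sh
# Minimal sketch: capture the one file matching a date-prefixed glob.
# The directory and file name are invented for illustration.
dir=$(mktemp -d)
today=$(date +%Y-%m-%d)
touch "$dir/MyFile1.${today}_12:34:56.log"

# POSIX-portable: expand the glob into the positional parameters,
# then take the first (here, the only) match.
set -- "$dir"/MyFile1."$today"*
filename1=$1
echo "$filename1"
```

Note that set -- overwrites the script's positional parameters, so save any arguments you still need before doing this.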
Related
I am looping through urls in a file to download their data and each url represents data for each hour of the day. I want to name the file the date and hour that it comes from. The following doesn't work, but I'm not quite sure why
myDate=$(date -v -1d '+%Y/%m/%d')
for hour in {0..23}
do
...
...
#set file name
name=$myDate.$hour.txt
curl -L -o $name "https://..."
done
I think it's just a problem with the syntax for $name in the curl statement, but I don't know how to correct it.
I get the following error
Warning: Failed to create the file 2019/09/18-0: No such file or directory
curl: (23) Failed writing body (0 != 16360)
date -v -1d '+%Y/%m/%d' returns a string containing slashes, which the shell treats as path separators. For example:
iMac-ForceBru:~ forcebru$ date -v -1d '+%Y/%m/%d'
2019/09/18
The 2019/09/18 would be treated as a file called 18 in directory 09, which is in turn inside directory 2019. It looks like the path 2019/09 doesn't exist on your system, so the file 2019/09/something.txt can't be created.
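A sketch of two ways around this, with the date hard-coded for illustration (the real script would use the date command shown above):

```shell
#!/bin/sh
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)" || exit 1
myDate=2019/09/18   # stand-in for: date -v -1d '+%Y/%m/%d'

# Option 1: create the directories the slashes imply, then write into them.
mkdir -p "$(dirname "$myDate")"   # creates 2019/09
touch "$myDate.0.txt"             # 2019/09/18.0.txt now works

# Option 2: use a separator that is not a path character.
flatDate=$(printf '%s' "$myDate" | tr '/' '-')   # 2019-09-18
touch "$flatDate.0.txt"
```

Option 1 keeps the per-day directory layout; option 2 keeps everything in one flat directory.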
I'm trying to develop a bash script which filters CSV files (generated every hour) from the previous day and merges them into a single CSV file. This script seems to do the job for me, except that I'm trying to filter files based on their filenames.
There would be 24 files for each day in the directory, and I need to filter out these files based on their name format:
foofoo_2017052101502.csv
foofoo_2017052104502.csv
foofoo_2017052104503.csv
foofoo_2017052204501.csv
foofoo_2017052204504.csv
Here, I need to filter for May 21, 2017, so my output must include the first three .csv files above.
What should I add in the script for this filter?
The following script calculates the previous day's date as yyyymmdd and uses that value in grep to select all the file names generated the previous day.
For MacOS
dt=`date -j -v-1d +%Y%m%d`
echo $dt
OutputFiles=`ls | grep foofoo_${dt}`
For Linux
dt=`date -d "yesterday" +%Y%m%d`
echo $dt
OutputFiles=`ls | grep foofoo_${dt}`
When added to the script mentioned, these commands will filter the file names for the previous day based on the current timestamp.
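A quick way to check the grep filter against the sample names, with dt hard-coded (the real script computes it with date as shown above):

```shell
#!/bin/sh
# Demo on made-up files in a scratch directory.
cd "$(mktemp -d)" || exit 1
touch foofoo_2017052101502.csv foofoo_2017052104502.csv \
      foofoo_2017052104503.csv foofoo_2017052204501.csv
dt=20170521
# Select only the files whose name carries the wanted date.
OutputFiles=$(ls | grep "foofoo_${dt}")
echo "$OutputFiles"
```

Only the three May 21 files survive the filter; the May 22 file is excluded.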
You can let bash do the filtering for you using globbing, for example to list only files with date May 21, 2017 you could use:
for filename in foofoo_20170521*.csv; do...
If you want to be able to call your script with an argument specifying the date to have more flexibility, you can use:
for filename in "foofoo_${1}*.csv"; do...
And then call your script with the date that you want to filter as an argument:
./your_script 20170521
And as @David C. Rankin mentioned in the comments, a very practical way would be to concatenate all the files from the date you want into one CSV that you then use in your script:
cat foofoo_20170521*.csv > combined_20170521.csv
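Putting the pieces together, a sketch of the whole flow on made-up files (yesterday's date is hard-coded here; the real script would compute it with date -d yesterday as shown earlier):

```shell
#!/bin/sh
cd "$(mktemp -d)" || exit 1
# One line per hourly file, so line counts are easy to check.
printf 'a\n' > foofoo_2017052101502.csv
printf 'b\n' > foofoo_2017052104502.csv
printf 'c\n' > foofoo_2017052204501.csv

dt=20170521   # real script: dt=$(date -d yesterday +%Y%m%d)
cat foofoo_"$dt"*.csv > combined_"$dt".csv
wc -l < combined_"$dt".csv
```

The glob does the filtering and cat does the merging, with no need to parse ls output.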
I have around 500 files that I need to rename with the date the report represents. The filenames are currently:
WUSR1453722998383.csv
WUSR1453723010659.csv
WUSR1453723023497.csv
And so on. The numbers in the filenames have nothing to do with the dates, so I cannot use them as a guide for what each file should be renamed to. The reports start from 02/12/2014, and there is a report for every day of the month up until yesterday (09/04/2016). Luckily, the filenames are sequential: 04/12/2014 has a higher number than 03/12/2014, which has a higher number than 02/12/2014, so the files are automatically listed in date order when sorted alphabetically.
There is however a date in the first line of the CSV before the data:
As at Date,2014-12-02
Now I've checked that I already have all the files, so what's the best way to rename them to the date? Either the script can treat 02/12/2014 as the starting date and rename each successive file with a +1 date, or it can read the date on the first line of each file (As at Date,2014-12-02, for example) and use that to rename the file.
I have no idea how to write either of the method above in bash, so if you could help out with this, that would be really appreciated.
In terms of file output, I was hoping for:
02-12-2014.csv
03-12-2014.csv
And so on
Is this the answer you need? It assumes all the files are under the current directory. Do some testing before you do the real operation. The precondition is that the date string in each CSV file is unique; otherwise some files will be overwritten.
#!/bin/bash
for f in *.csv
do
    # build the new name dd-mm-yyyy.csv from the date on the first line
    o=$(sed '1q' "$f" | awk -F"[,-]" '{print $NF"-"$(NF-1)"-"$(NF-2)".csv"}')
    # should we back up the file first?
    # cp "$f" "$f.bak"
    mv "$f" "$o"
done
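To see what the sed | awk pipeline produces, it can be run on the sample header line from the question:

```shell
#!/bin/sh
# The field separator [,-] splits on both commas and hyphens, so the
# header line breaks into: "As at Date", "2014", "12", "02".
line='As at Date,2014-12-02'
o=$(printf '%s\n' "$line" | awk -F"[,-]" '{print $NF"-"$(NF-1)"-"$(NF-2)".csv"}')
echo "$o"   # 02-12-2014.csv
```

Printing the last three fields in reverse order turns the ISO yyyy-mm-dd date into the requested dd-mm-yyyy filename.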
I have a bash script which backups my source code on a 10 minute basis thru crontab. Script was working until the end of August. It's not working since September 1st. This is the script:
#!/bin/sh
date=`date +%e-%m-%y`
cd /home/neky/python
tar -zcf lex.tar.gz lex/
echo $date
mv lex.tar.gz lex-$date.tar.gz
mv lex-$date.tar.gz /home/neky/Dropbox/lex/lex-$date.tar.gz
If I execute it manually, it prints the current date, 4-09-12, and this error: mv: target ‘4-09-12.tar.gz’ is not a directory
What could be the problem?
Your date contains a space when the day of month is a single digit (which also explains why it only stopped working in the new month). That results in your command being split up, i.e.
# this is what you end up with
mv lex.tar.gz lex- 4-09-12.tar.gz
Use date +%d-%m-%y instead which will give you 04-09-12 (note %d instead of %e).
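The difference is easy to see with a fixed date (GNU date's -d option is assumed here):

```shell
#!/bin/sh
# %e pads a single-digit day with a space, %d with a zero.
d_e=$(date -d 2012-09-04 '+%e-%m-%y')
d_d=$(date -d 2012-09-04 '+%d-%m-%y')
printf '[%s] [%s]\n' "$d_e" "$d_d"   # [ 4-09-12] [04-09-12]
```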
If you really want a space in the name, you'll need to quote your variables, i.e.:
mv lex.tar.gz "lex-$date.tar.gz"
mv "lex-$date.tar.gz" /home/neky/Dropbox/lex/
The character % (part of your date format) is a special one in cron scripts, so you need to escape it.
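For example, if the date format ever moves into the crontab line itself, each % would need a backslash. A sketch of a hypothetical crontab entry (the paths are the ones from the question, but the inlined-tar variant is invented for illustration):

```
# Calling the script is safe as-is; % only appears inside the script:
*/10 * * * * /home/neky/backup.sh
# But inlining the date into the crontab line requires escaping each %:
*/10 * * * * tar -zcf "/home/neky/lex-$(date +\%d-\%m-\%y).tar.gz" /home/neky/python/lex
```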
I use the ls command to get the details of certain types of files. The file names have a specific format: the first two words, followed by the date on which the file was generated.
e.g.:
Report_execution_032916.pdf
Report_execution_033016.pdf
The word Summary can also appear in place of Report.
e.g.:
Summary_execution_032916.pdf
Hence in my shell script I put these lines of code:
DATE=`date +%m%d%y`
Model=Report
file=`ls ${Model}_execution_*${DATE}_*.pdf`
But the value of Model always gets resolved to 'REPORT' and hence I get:
ls: cannot access REPORT_execution_*032916_*.pdf: No such file or directory
I am stuck on how the resolution of Model is happening here.
I can't reproduce the exact code here, so I have changed some variable names. Initially I used the variable name type instead of Model, but Model is the one I use in my actual code.
You've changed your script to use Model=Report and ${Model} and you've said you have typeset -u Model in your script. The -u option to the typeset command (instead of declare — they're synonyms) means "convert the strings assigned to all upper-case".
-u When the variable is assigned a value, all lower-case characters are converted to upper-case. The lower-case attribute is disabled.
That explains the upper-case REPORT in the variable expansion. You can demonstrate by writing:
Model=Report
echo "Model=[${Model}]"
It would echo Model=[REPORT] because of the typeset -u Model.
Don't use the -u option if you don't want it.
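bash (4.0 and later) has the same feature as declare -u, so the behaviour can be reproduced outside ksh. A minimal demonstration, run under bash explicitly so it works from any shell:

```shell
#!/bin/sh
# declare -u converts every value assigned to the variable to upper case.
out=$(bash -c 'declare -u Model; Model=Report; echo "Model=[$Model]"')
echo "$out"   # Model=[REPORT]
```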
You should probably fix your glob expression too:
file=$(ls ${Model}_execution_*${DATE}*.pdf)
Using $(…) instead of backticks is generally a good idea.
And, as a general point, learn how to Debug a Bash Script and always provide an MCVE (How to create a Minimal, Complete, and Verifiable Example?) so that we can see what your problem is more easily.
Some things to look at:
type is a shell builtin; using it as a variable name won't break your script, but I suggest changing it to something else.
You have an extra _ after ${DATE}, and if the date is the last part of the name, there's no point in having an * at the end either. The file definition should be:
file=`ls ${type}_execution_*${DATE}.pdf`
Try debugging your code by parts: instead of doing an ls, do an echo of each variable, see what comes out, and trace the problem back to its origin.
As @DevSolar pointed out, you may have problems parsing the output of ls.
As a workaround
ls | grep `date +%m%d%y` | grep "_execution_" | grep -E 'Report|Summary'
filters the ls output afterwards.
touch 'Summary_execution_032916.pdf'
DATE=`date +%m%d%y`
Model=Summary
file=`ls ${Model}_execution_*${DATE}*.pdf`
worked just fine on
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Part of question:
But the value of Model always gets resolved to 'REPORT' and hence I get:
This is because somewhere in your script the value of Model is being forced to upper case (as it turned out, by a typeset -u in an earlier script).
Part of question:
ls: cannot access REPORT_execution_*032916_*.pdf: No such file or directory
The 'No such file or directory' issue is due to the additional _ and the extra * in your third line.
Remove them and the error will be gone, though Model will still resolve to REPORT.
Original 3rd line :
file=`ls ${Model}_execution_*${DATE}_*.pdf`
Change it to
file=`ls ${Model}_execution_${DATE}.pdf`
The above change will resolve the 'cannot access' issue.
Part of question
I am stuck at how the resolution of Model is happening here.
I am not sure what you are trying to achieve, but if you want to populate the file variable with file names matching anything_execution_someDate.pdf, then you can write your script as:
DATE=`date +%m%d%y`
file=`ls *_execution_${DATE}.pdf`
If you echo the value of file you will get
Report_execution_032916.pdf Summary_execution_032916.pdf
as the answer
There were some other scripts invoked before control reached the lines I mentioned in the question. In one such script there is this code:
typeset -u Model
This forces the value of the variable Model to upper case, which is why this error was thrown:
ls: cannot access REPORT_execution_032916_.pdf: No such file or directory
I am sorry that I couldn't provide a minimal, complete and verifiable example.