Having an issue with file name appended with date - shell scripting - bash

I tried to append the current day and time to an existing file name in a shell script, and I found my command is not working as expected.
For example, if my file name is f1.log, I need to append the current time to it. This appended version must be used for further processing of the file.
I tried the following script but I am getting an error:
now=$(date +"%m-%d-%Y/%T")
echo hi >>time.log
mv "time.log" "time.$now.log" (error here : file or directory not found)
echo hello >> time.log$now (have to continue processing with new file)

You cannot have a / character in a file name. The mv command is looking for a directory named with the month, day, and year from the output of date and trying to create a file inside it named with the time. Just change your format so it does not include / in the file name.

The problem is that the / in your date +"%m-%d-%Y/%T" is treated as a path separator, not as part of the file name.
Change it to a - instead (or something else, as long as it's not / or another metacharacter that will make the files difficult to work with in the future).
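For example, a minimal sketch along the lines of the question (the underscore format is just one of many /-free choices):
now=$(date +"%m-%d-%Y_%H-%M-%S")   # no / anywhere in the result
echo hi >> time.log
mv time.log "time.$now.log"
echo hello >> "time.$now.log"      # continue processing with the renamed file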

Related

Need to update the csv file with the timestamp of the files from another location

I have a csv file score.csv at the path /NAS/DQ with 2 columns: scorename, filename.
scorename,filename
ABC,cust.txt
XYZ,bank.txt
These files cust.txt and bank.txt are placed at /NAS/files_path. There will be a unique instance of each file placed at this path every day.
I want to append the file timestamp from /NAS/files_path to the csv file at /NAS/DQ.
So the timestamp should be updated every time in the csv file at the /NAS/DQ location.
I am new to unix and currently looking for ways to do it.
Any help is appreciated!!
Sed will be a good candidate for this:
sed -ri '2,$s/(^.*$)/\1 '"$(date)"'/' filename
Substitute the existing line with the existing line plus a space and the date. The format of the date can be amended as required with +%... We don't want to touch the header line, so run the substitution from line 2 to the last line ($). The $(date) substitution is quoted so that the spaces in its output don't split the sed expression.
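If what the question actually needs is each file's own modification time rather than the current date, a hedged sketch along these lines may fit better (it assumes GNU date with -r and writes to score_with_ts.csv, a name invented here):
{ head -n 1 /NAS/DQ/score.csv | sed 's/$/,timestamp/'
  tail -n +2 /NAS/DQ/score.csv | while IFS=, read -r scorename filename; do
      ts=$(date -r "/NAS/files_path/$filename" +"%Y-%m-%d %H:%M:%S")   # the file's mtime
      echo "$scorename,$filename,$ts"
  done
} > /NAS/DQ/score_with_ts.csv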

Search the File Pattern from File Name

I have a file PATTERN_FILE.txt which stores lines like the ones below -
ABC|ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat|8|,|70|NAME
ABC|ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat|9|,|70|PLACE
XYZ|XYZ_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].dat|23|,|70|SSN
XYZ|XYZ_[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9].dat|33|,|70|DOB
MNO|MNO_SUMMIT.dat|40|,|70|ADDRESS
MNO|MNO_SUMMIT.dat|5|,|70|COUNTRY
So this PATTERN_FILE.txt stores some information about the actual file, but the file name is stored as a pattern (when the file name has a date in it) rather than as the actual name.
My requirement is a command to which I can pass the actual file name, like "ABC_20200408.dat", and which returns all the related lines from this file. Can someone please help?
The command below works fine, but with it I have to pass each pattern one by one to check which one matches.
echo "ABC_20200408.dat"|grep ABC_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].dat

Shell script - replace a string in the all the files in a directory based on file name

I want to replace a constant string in multiple files based on the name of the file.
Example:
In a directory I have many files named like 'X-A01', 'X-B01', 'X-C01'.
In each file there is a string 'SS-S01'.
I want to replace string 'SS-S01' in the first file with 'X-A01', second file with 'X-B01' and third file with 'X-C01'.
Please help me with how to do this, as I have hundreds of files like this and do not want to edit them all manually, one by one.
Remember to back up your files(!) before running this command, since I have not actually tried it myself:
You could do something like:
for file in <DIR>/*; do sed -i "s/SS-S01/${file##*/}/" "$file"; done
This will loop over each file in <DIR> and, for each iteration, assign the file name to $file. For each file, sed will replace the first occurrence of SS-S01 on each line of that file with the file name.
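If you want a safety net, a variant like this (still a sketch; the /path/to/dir location and the X-* glob are assumptions here) keeps a backup of every file it touches and replaces every occurrence, not only the first one per line:
for file in /path/to/dir/X-*; do
    sed -i.bak "s/SS-S01/${file##*/}/g" "$file"   # each original is kept as <name>.bak
done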

sql loader without .dat extension

Oracle's sqlldr defaults to a .dat extension, which I want to override, and I don't want to rename the file. When I googled this, I found a few answers suggesting a trailing dot, like data='fileName.', but that is not working. Please share your ideas.
The error message is that fileName.dat is not found.
SQL*Loader has a default extension for each of its input files (data, log, control, ...):
data = .dat
log = .log
control = .ctl
bad = .bad
PARFILE = .par
But you have to pass the file name without the apostrophes and the dot:
sqlldr user/pass@db control=control data=data
SQL*Loader will then add the extensions itself: control.ctl, data.dat.
Nevertheless, I do not understand why you do not want to specify the extension.
You can't, at least in Unix/Linux environments. In Windows you can use the trailing period trick, specifying either INFILE 'filename.' in the control file or DATA=filename. on the command line. Windows file name handling allows that; you can for instance do DIR filename. at a command prompt and it will list the file with no extension (as will DIR filename). But you can't do that with *nix, from a shell prompt or anywhere else.
You said you don't want to copy or rename the file. Temporarily renaming it might be the simplest solution, but as you may have a reason not to do that even briefly you could instead create a hard or soft link to the file which does have an extension, and use that link as the target instead. You could wrap that in a shell script that takes the file name argument:
# set variable from correct positional parameter; if you pass in the control
# file name or other options, this might not be $1 so adjust as needed
# if the temporary file won't be in the same directory, this needs to be the full path
filename=$1
# optionally check file exists, is readable, etc. but overkill for demo
# can also check temporary file does not already exist - stop or remove
# create soft link somewhere it won't impact any other processes
ln -s "${filename}" "/tmp/${filename##*/}.dat"
# run SQL*Loader with soft link as target
sqlldr user/password@db control=file.ctl data="/tmp/${filename##*/}.dat"
# clean up
rm -f "/tmp/${filename##*/}.dat"
You can then call that as:
./scriptfile.sh /path/to/filename
If you can create the link in the same directory then you only need to pass the file, but if it's somewhere else - which may be necessary depending on why renaming isn't an option, and desirable either way - then you need to pass the full path of the data file so the link works. (If the temporary file will be in the same filesystem you could use a hard link, and you wouldn't have to pass the full path then either, but it's still cleaner to do so).
As you haven't shown your current command line options you may have to adjust that to take into account anything else you currently specify there rather than in the control file, particularly which positional argument is actually the data file path.
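As a rough illustration of the hard-link variant mentioned above (a sketch; it assumes the link can live next to the data file on the same filesystem and that adding .dat there doesn't clash with an existing file):
filename=$1
ln "${filename}" "${filename}.dat"                                # hard link, no full path needed
sqlldr user/password@db control=file.ctl data="${filename}.dat"
rm -f "${filename}.dat"                                           # removes only the extra directory entry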
I have the same issue. I get a monthly download of reference data used in a medical application, and the 485 downloaded files don't have file extensions (about 2 GB). Unless I can load without file extensions, I have to copy the files with .dat and load from there.

Getting time and date when file was copied on Windows XP

I have scheduled a batch file on a Windows XP machine to copy a number of text files from a network share. The next time this task runs, the files are simply overwritten. The batch file goes something like this:
copy \\networkshare1\*.txt C:\monitoring\files\
copy \\networkshare2\*.txt C:\monitoring\files\
I then use Perl to analyse the files. What I would like to know is if there is an easy way, without changing the file name, of recording somewhere what time the file was copied from the network share so that my Perl script knows whether it is working with an old or new version of the file.
One way, assuming destination is NTFS:
setlocal enabledelayedexpansion
set dest=C:\monitoring\files\
rem use delayed expansion so !TIME! is evaluated for each file, not once when the loop is parsed
for %%f in ("\\networkshare1\*.txt") do (
copy "%%f" "%dest%"
echo !TIME! >"%dest%%%~nxf:copywhen"
)
This copies each file individually and appends the time to the data stream copywhen, which stays associated with the file as long as it sits on an NTFS volume.
I'm pretty sure Perl's standard file routines will allow reading this back by simply passing the path as C:\monitoring\files\whatever.txt:copywhen; if not, you can capture the output of more <"C:\monitoring\files\whatever.txt:copywhen" from the command line.
Take a look at the File::stat package. It replaces Perl's built-in stat command with a by-name interface. You can use either the built-in stat command or the File::stat package.
use File::stat;
use feature qw(say);
my $file_stat = stat($file_name);
say "The following times are displayed as seconds since January 1, 1970"
say " File Last Access time: " . $file_stat->atime;
say " File Last Modification time: " . $file_stat->mtime;
say " File inode Change Time: " . $file_stat->ctime;
One of these should do it. I think your best bet might be mtime.
If you don't want to use File::stat, use the built-in stat command:
say "The following times are displayed as seconds since January 1, 1970";
say " File Last Access time: " . (stat $my_file)[8];
say " File Last Modification time: " . (stat $my_file)[9];
say " File inode Change Time: " . (stat $my_file)[10];
To convert the time into something human readable, use the Time::Piece module.
A simple way is to just delete the file right before you copy it. Since it's being copied from another drive, the time/date stamp should be when it was copied to the drive. That's the way Windows has always worked for me. ^_^
