Getting time and date when file was copied on Windows XP - windows

I have scheduled a batch file on a Windows XP machine to copy a number of text files from a network share. The next time the task runs, the files are simply overwritten. The batch file goes something like this:
copy \\networkshare1\*.txt C:\monitoring\files\
copy \\networkshare2\*.txt C:\monitoring\files\
I then use Perl to analyse the files. What I would like to know is whether there is an easy way, without changing the file name, of recording somewhere what time each file was copied from the network share, so that my Perl script knows whether it is working with an old or a new version of the file.

One way, assuming the destination is NTFS:
set dest=C:\monitoring\files\
rem delayed expansion makes !TIME! evaluate on each iteration; plain %TIME%
rem would expand once when the loop is parsed and stamp every file the same
setlocal enabledelayedexpansion
for %%f in ("\\networkshare1\*.txt") do (
    copy "%%f" "%dest%"
    echo !TIME! >"%dest%%%~nxf:copywhen"
)
This copies each file individually and writes the time to the alternate data stream copywhen, which stays associated with the file for as long as it lives on an NTFS volume.
I'm pretty sure Perl's standard file routines will read this back if you simply pass the path as C:\monitoring\files\whatever.txt:copywhen; if not, you can capture the output of more < "C:\monitoring\files\whatever.txt:copywhen" from the command line.
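The stream also reads the same way from any other language that passes the path straight through to Windows. A minimal Python sketch, assuming the batch job above has already written the copywhen stream (the file name is just the example from above):

# Read the NTFS alternate data stream written by the batch file.
# Only works on Windows with the file sitting on an NTFS volume.
path = r"C:\monitoring\files\whatever.txt:copywhen"
with open(path) as stream:
    copied_at = stream.read().strip()
print("copied at:", copied_at)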

Take a look at the File::stat package, which replaces Perl's built-in stat function with a by-name interface. You can use either the built-in stat or the File::stat package.
use File::stat;
use feature qw(say);
my $file_stat = stat($file_name);
say "The following times are displayed as seconds since January 1, 1970";
say " File Last Access time: " . $file_stat->atime;
say " File Last Modification time: " . $file_stat->mtime;
say " File inode Change Time: " . $file_stat->ctime;
One of these should do it. I think your best bet might be mtime.
If you don't want to use File::stat, use the builtin stat command:
say "The following times are displayed as seconds since January 1, 1970";
say " File Last Access time: " . (stat $my_file)[8];
say " File Last Modification time: " . (stat $my_file)[9];
say " File inode Change Time: " . (stat $my_file)[10];
To convert the time into something human readable, use the Time::Piece module.
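For a quick cross-check outside Perl, the same epoch-second values come back from os.stat in Python, and datetime.fromtimestamp renders them readable (a sketch; the path is hypothetical):

import os
from datetime import datetime

st = os.stat(r"C:\monitoring\files\whatever.txt")   # hypothetical path
print("Last access time:       ", datetime.fromtimestamp(st.st_atime))
print("Last modification time: ", datetime.fromtimestamp(st.st_mtime))
# on Windows, st_ctime is creation time rather than inode change time
print("Change/creation time:   ", datetime.fromtimestamp(st.st_ctime))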

A simple way is to just delete the file right before you copy it. Since it's being copied from another drive, the date/time stamp should be the time it was copied to the drive. That's the way Windows has always worked for me. ^_^

Related

Command prompt batch renaming results in syntax error

I need to rename 80k files in multiple folders & subfolders in the same directory. I have been trying to use ren but have been unsuccessful; I get an incorrect syntax error.
My old name looks like this:
c:/users/alice/BiDIR_DOCS_2017_Nov08020423\Company,LLC##NA##7967425.00##7967425.00\Company LLC A and A - Aug2017.pdf BiDIR_DOCS_2017_Nov08020423\Company, LLC##NA##7967425.00##7967425.00\document_# (x.y.z)-test~.pdf
and my new name looks like this:
c:/users/alice/BiDIR_DOCS_2017_Nov08020423\Company,LLC##NA##7967425.00##7967425.00\Company LLC A and A - Aug2017.pdf BiDIR_DOCS_2017_Nov08020423\Company, LLC##NA##7967425.00##7967425.00\system, a old name~ ` to # system b document (xyz)-test.pdf
I have the existing directory print in one column of Excel and in the next column what I want the directory print to be.
I'm not sure if I'm starting my ren command at the right hierarchy of my directory, or if I need quotation marks to keep the spaces and symbols in my new name.
I have tried improvising and testing on my own without success and I cannot find an article online on point.
Try FAR (Find And Replace) - it's a free utility that works well.
http://findandreplace.sourceforge.net/
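Since the old and new names already sit in two Excel columns, another route is to export those columns to CSV and script the renames. A minimal Python sketch, assuming a hypothetical renames.csv with the existing full path in column one and the desired full path in column two (Excel's CSV export quotes fields containing commas, which the csv module handles):

import csv
import os

with open("renames.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        if len(row) != 2:
            continue                    # skip blank or malformed rows
        old, new = row
        if os.path.exists(old):
            os.rename(old, new)         # full paths, so no quoting worries
        else:
            print("missing:", old)

This also sidesteps the quoting problems that spaces, commas, and # characters cause for ren.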

sql loader without .dat extension

Oracle's sqlldr defaults to a .dat extension, which I want to override; I don't want to rename the file. Googling turns up a few answers suggesting a trailing dot, like data='fileName.', but that is not working for me. Please share your ideas.
The error message says fileName.dat is not found.
SQL*Loader has a default extension for each of its input files:
data = .dat
log = .log
control = .ctl
bad = .bad
PARFILE = .par
But you have to pass the file name without quotes and without the dot:
sqlldr user/pass@db control=control data=data
SQL*Loader will add the extensions itself: control.ctl, data.dat.
Nevertheless, I do not understand why you don't want to specify the extension.
You can't, at least in Unix/Linux environments. In Windows you can use the trailing-period trick, specifying either INFILE 'filename.' in the control file or DATA=filename. on the command line. Windows file-name handling allows that; you can, for instance, do DIR filename. at a command prompt and it will list the file with no extension (as will DIR filename). But you can't do that with *nix, from a shell prompt or anywhere else.
You said you don't want to copy or rename the file. Temporarily renaming it might be the simplest solution, but as you may have a reason not to do that even briefly, you could instead create a hard or soft link to the file which does have an extension, and use that link as the target instead. You could wrap that in a shell script that takes the file name as an argument:
# set variable from the correct positional parameter; if you pass in the control
# file name or other options, this might not be $1, so adjust as needed
# if the temporary file won't be in the same directory, this needs to be the full path
filename=$1
# optionally check the file exists, is readable, etc. - overkill for a demo
# you could also check that the temporary file does not already exist - stop or remove it
# create the soft link somewhere it won't impact any other processes
ln -s "${filename}" "/tmp/${filename##*/}.dat"
# run SQL*Loader with the soft link as the target
sqlldr user/password@db control=file.ctl data="/tmp/${filename##*/}.dat"
# clean up
rm -f "/tmp/${filename##*/}.dat"
You can then call that as:
./scriptfile.sh /path/to/filename
If you can create the link in the same directory then you only need to pass the file name, but if it's somewhere else - which may be necessary depending on why renaming isn't an option, and desirable either way - then you need to pass the full path of the data file so the link works. (If the temporary file will be in the same filesystem you could use a hard link, and then you wouldn't have to pass the full path either, but it's still cleaner to do so.)
As you haven't shown your current command line options you may have to adjust that to take into account anything else you currently specify there rather than in the control file, particularly which positional argument is actually the data file path.
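If a shell script is awkward to fit into your job, the same link-and-load idea can be sketched in Python; the credentials and control file name below are placeholders:

import os
import subprocess
import sys

src = sys.argv[1]                                  # full path to the extension-less data file
link = "/tmp/" + os.path.basename(src) + ".dat"    # soft link carrying the .dat extension
os.symlink(src, link)
try:
    # placeholder credentials and control file
    subprocess.run(["sqlldr", "user/password@db",
                    "control=file.ctl", "data=" + link], check=True)
finally:
    os.remove(link)                                # clean up even if the load fails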
I have the same issue. I get a monthly download of reference data used in a medical application, and the 485 downloaded files (about 2 GB) don't have file extensions. Unless I can load without file extensions, I have to copy the files with a .dat extension and load from there.

Having Issue with file name appended with date -shell scripting

I tried to append the current date and time to an existing file name in a shell script, and found my command is not working as expected.
For example, if my file name is f1.log, I need to append the current time to it. This appended version must then be used for further processing of the file.
I tried with the following script but getting an error
now=$(date +"%m-%d-%Y/%T")
echo hi >> time.log
mv "time.log" "time.$now.log"    # error here: file or directory not found
echo hello >> time.log$now       # have to continue processing with the new file
You cannot have a / character in a filename. The mv command is looking for a directory named with the month, day, and year from the output of date, and trying to create a file in it named by the time. Just change your format so it doesn't include / in the filename.
The problem is the / in your date +"%m-%d-%Y/%T": the filesystem treats it as a path separator.
Change it to a - instead (or something else, as long as it's not / or another character that will make the files difficult to work with later), e.g. now=$(date +"%m-%d-%Y-%H-%M-%S").
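If the further processing happens outside the shell anyway, the same rename with a filesystem-safe timestamp is a few lines of Python (a sketch, assuming time.log exists):

import os
from datetime import datetime

stamp = datetime.now().strftime("%m-%d-%Y_%H-%M-%S")   # no / characters
os.rename("time.log", "time." + stamp + ".log")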

BASH script to copy files based on date, with a catch

Let me explain the tree structure: I have a network directory where, several times a day, new .txt files are copied by our database. Those files sit in directories named after usernames. On the local disk I have the same structure (directories named after usernames), which needs to be updated with the latest .txt files. It's not a sync procedure: I copy the remote file to a local destination and I don't care what happens to it after that, so I don't need to keep the two in sync. However, I do need to copy ONLY the new files, not those I have already copied. It would look something like:
Remote disk
/mnt/remote/database
+ user1/
+ user2/
+ user3/
+ user4/
Local disk
/var/database
+ user1/
+ user2/
+ user3/
+ user4/
I played with
find /mnt/remote/database/ -type f -mtime +1
and other variants, but it's not working very well.
So, the script i am trying to figure is the following:
1- check /mnt/remote/database recursively for *.txt
2- check each file's date to see if it is new (newer than the last time I checked - maybe maintain a text file holding the last-checked time as a reference?)
3- if the file is new, copy it to the proper destination in /var/database (so /mnt/remote/database/user1/somefile.txt will be copied to /var/database/user1/)
I'll run the script through a cron job.
I'm doing this in C right now, but the IT people are not very good at debugging or writing C; if they need to add or fix something, they can handle bash scripts better, which I am not very good at.
Any ideas out there?
thank you!
You could consider using a local rsync between the input & output directories; it has all the options you need to make its sync policy very flexible.
# copy everything newer than the marker file, then bump the marker
find /mnt/remote/database/ -type f -newer "$TIMESTAMP_FILE" | xargs $CP_COMMAND
touch "$TIMESTAMP_FILE"
The solution is here:
http://www.movingtofreedom.org/2007/04/15/bash-shell-script-copy-only-files-modifed-after-specified-date/
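If bash stays fiddly, the three steps above also map directly onto a short Python script that your IT people could maintain. A sketch, assuming a hypothetical marker file whose modification time records the last run:

import os
import shutil

SRC = "/mnt/remote/database"
DST = "/var/database"
STAMP = os.path.join(DST, ".last_run")   # hypothetical marker file

last_run = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0
for dirpath, _dirnames, filenames in os.walk(SRC):
    for name in filenames:
        if not name.endswith(".txt"):
            continue
        src = os.path.join(dirpath, name)
        if os.path.getmtime(src) > last_run:
            # mirror the user1/, user2/, ... layout under the local root
            dst_dir = os.path.join(DST, os.path.relpath(dirpath, SRC))
            os.makedirs(dst_dir, exist_ok=True)
            shutil.copy2(src, dst_dir)
# record this run for the next comparison
open(STAMP, "w").close()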

Sync File Modification Time Across Multiple Directories

I have a computer A with two directory trees. The first directory contains the original mod dates, which span back several years. The second directory is a copy of the first with a few additional files. There is a second computer B which contains a directory tree that is the same as the second directory on computer A (new mod times and additional files). How do I update the files in the two newer directories on both machines so that the mod times on the files are the same as the originals? Note that these directory trees are on the order of tens of gigabytes, so the solution would have to include some method of sending only the date information to the second computer.
The answer by Paul is partly correct, rsync is able to do this, however with different parameters. The correct command is
rsync -Prt --size-only original_dir copy_dir
where -P enables partial transfers and displays a progress indicator, -r recurses through subdirectories, -t preserves time stamps and --size-only doesn't transfer files that match in size.
The following command will make sure that TEST2 gets the same date assigned that TEST1 has (BSD/macOS stat syntax; '%Sa' reads TEST1's last access time):
touch -t `stat -t '%Y%m%d%H%M.%S' -f '%Sa' TEST1` TEST2
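Note that the -t and -f flags above are BSD/macOS stat syntax; GNU stat on Linux takes different options. A portable way to copy one file's timestamps onto another, sketched in Python with the example file names:

import os

st = os.stat("TEST1")
# give TEST2 the same access and modification times as TEST1
os.utime("TEST2", (st.st_atime, st.st_mtime))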
Now, instead of using hard-coded values here, you could find the files using the "find" utility and then run touch via SSH on the remote machine. However, that means you may have to enter the password for each file, unless you switch SSH to key-based authentication. I'd rather not do it all in one super-fancy one-liner; instead, let's work with temp files. First go to the directory in question and run a find (you can filter by file type, size, extension, whatever pleases you - see "man find" for details; I'm just filtering by -type f here to exclude any directories):
find . -type f -print -exec stat -t '%Y%m%d%H%M.%S' -f '%Sm' "{}" \; > /tmp/original_dates.txt
Now we have a file that looks like this (in my example there are only two entries there):
# cat /tmp/original_dates.txt
./test1
200809241840.55
./test2
200809241849.56
Now just copy the file over to the other machine and place it in the directory (so the relative file paths match) and apply the dates:
cat original_dates.txt | (while read FILE && read DATE; do touch -t $DATE "$FILE"; done)
This also works with file names containing spaces.
One note: I used the last "modification" date in stat, as that's what you wrote in the question. However, it rather sounds as if you want the "creation" date. If so, you need to alter the stat call a bit; the BSD stat format letters are:
'%Sm' - last modification date
'%Sc' - last status (inode) change date (not creation, despite the common confusion)
'%Sa' - last access date
'%SB' - creation (birth) date, on filesystems that record it
However, touch can only change the modification and access times; I think it can't change the creation time of a file... so if that was your real intention, my solution might be sub-optimal - but in that case your question was as well ;-)
I would go through all the files in the source directory tree and gather the modification times from them into a script that I could run on the other directory trees. You will need to be careful about a few 'gotchas'. First, make sure that your output script has relative paths, and make sure you run it from the proper target directory, which should be the root directory of the target tree. Also, when changing machines make sure you are using the same timezone as you were on the machine where you generated the script.
Here's a Perl script I put together that will output the touch commands needed to update the times on the other directory trees. Depending on the target machines, you may need to tweak the date formats or command options, but this should give you a place to start.
#!/usr/bin/perl
my $STARTDIR = "$ENV{HOME}/test";
chdir $STARTDIR;
my @files = `find . -type f`;
chomp @files;
foreach my $file (@files) {
    my $mtime = localtime((stat($file))[9]);
    print qq(touch -m -d "$mtime" "$file"\n);
}
The other approach you could try is to attach the remote directory using NFS and then copy the times using find and touch -r.
I think rsync (with the right options) will do this - it claims to only send file differences, so presumably it will work out that there are no differences to be transferred.
--times preserves the modification times, which is what you want.
See (for instance) http://linux.die.net/man/1/rsync
Also add -I (--ignore-times: "don't skip files that match size and time") so that all files are "transferred", and trust rsync's file-differences optimisation to make it "fairly efficient" - for example, rsync -rtI original_dir/ copy_dir/. See the excerpt from the man page below.
-t, --times
This tells rsync to transfer modification times along with the files and update them on the remote system. Note that if this option is not used, the optimization that excludes files that have not been modified cannot be effective; in other words, a missing -t or -a will cause the next transfer to behave as if it used -I, causing all files to be updated (though the rsync algorithm will make the update fairly efficient if the files haven't actually changed, you're much better off using -t).
I used the following Python scripts instead.
Python scripts run much faster than an approach that creates a new process for each file (like using find and stat). The solution below also works across timezone differences between systems, as it uses UTC times. It also works with paths containing spaces (but not paths containing newlines!). It doesn't set times for symlinks, because the operating system provides no mechanism to modify the timestamp of a symlink itself, but in a file manager the time of the file the symlink points at is shown instead anyway. It uses a maxTime parameter to avoid resetting dates for files that were actually modified after being copied from the original directory.
listMTimes.py:
import os
from datetime import datetime
from pytz import utc
for dirpath, dirnames, filenames in os.walk('./'):
    for name in filenames + dirnames:
        path = os.path.join(dirpath, name)
        # Avoid symlinks because os.path.getmtime and os.utime get and
        # set the time of the pointed file, and in the new directory,
        # the link may have been redirected.
        if not os.path.islink(path):
            mtime = datetime.fromtimestamp(os.path.getmtime(path), utc)
            print(mtime.isoformat() + " " + path)
setMTimes.py:
import datetime, fileinput, os, sys, time
import dateutil.parser
from pytz import utc
# Based on
# http://stackoverflow.com/questions/6999726/python-getting-millis-since-epoch-from-datetime
def unix_time(dt):
    epoch = datetime.datetime.fromtimestamp(0, utc)
    delta = dt - epoch
    return delta.total_seconds()

if len(sys.argv) != 2:
    print('Syntax: ' + sys.argv[0] + ' <maxTime>')
    print('  where <maxTime> is an ISO time, e.g. "2013-12-02T23:00+02:00".')
    exit(1)

# A file with modification time newer than maxTime is not reset to
# its original modification time.
maxTime = unix_time(dateutil.parser.parse(sys.argv[1]))

for line in fileinput.input([]):
    (datetimeString, path) = line.rstrip('\r\n').split(' ', 1)
    mtime = dateutil.parser.parse(datetimeString)
    if os.path.exists(path) and not os.path.islink(path):
        if os.path.getmtime(path) <= maxTime:
            os.utime(path, (time.time(), unix_time(mtime)))
Usage: in the first directory (the original) run
python listMTimes.py >/tmp/original_dates.txt
Then in the second directory (a copy of the original, possibly with some files modified/added/deleted) run something like this:
python setMTimes.py 2013-12-02T23:00+02:00 </tmp/original_dates.txt
