I loaded a find command into AutoSys. It zips anything older than 15 days.
The script has only two lines:
#!/bin/bash
find /casper/dir/usa.* -type f -mtime +15 -exec gzip --verbose {} \;
The problem is that the job finishes successfully right away, but everything gzip prints is written to the error file.
casperrd#usa04 1026$ ls -ltr /unity_apps/casperrd/logs/autosys/*CASPER_JOB_AUTOSYS*
-rw-r--r-- 1 capsergrp casper 0 Aug 21 16:35 /unity_apps/casperrd/logs/autosys/CAPSER_JOB_AUTOSYS_20180821_20:35:20.out
-rw-r--r-- 1 capsergrp casper 662 Aug 21 16:43 /unity_apps/casperrd/logs/autosys/CAPSER_JOB_AUTOSYS_20180821_20:35:20.err
casperrd#usa04 1027$
I need to show which files were zipped, and I want that list written to the .out file, not the .err file.
cat /unity_apps/casperrd/logs/autosys/CAPSER_JOB_AUTOSYS_20180821_20:35:20.err
/casper/log/casperjob.20180622.txt: 94.2% -- replaced with /casper/log/casperjob.20180622.txt.gz
/casper/log/casperjob.20180625.csv: 74.6% -- replaced with /casper/log/casperjob.20180625.csv.gz
/casper/log/casperjob.20180625.txt: 94.2% -- replaced with /casper/log/casperjob.20180625.txt.gz
/casper/log/casperjob.20180626.csv:
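A likely explanation: gzip --verbose writes its progress messages to stderr, which AutoSys captures in the .err file. A minimal sketch (the /casper paths are from the question and are not run here) showing two ways to get the file list onto stdout instead:

```shell
#!/bin/bash
# gzip --verbose reports to stderr, so AutoSys routes it to the .err file.
# Option 1: let find itself print each matched name to stdout (.out file)
# and drop --verbose:
#   find /casper/dir/usa.* -type f -mtime +15 -print -exec gzip {} \;
# Option 2: keep --verbose but merge stderr into stdout:
#   find /casper/dir/usa.* -type f -mtime +15 -exec gzip --verbose {} \; 2>&1

# Self-contained demonstration of option 1 in a temporary directory:
set -eu
dir=$(mktemp -d)
touch -d '20 days ago' "$dir/usa.old.log"   # older than 15 days: compressed
touch "$dir/usa.new.log"                    # recent: left alone
find "$dir" -type f -mtime +15 -print -exec gzip {} \;
```

With -print placed before -exec, find writes each matched name to stdout just before gzip compresses it, so the list lands in the .out file.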
I would like to list all directories in a directory. Some of them have spaces in their names. There are also files in the target directory, which I would like to ignore.
Here is the output of ls -lah data/:
drwxr-xr-x 5 me staff 160B 24 Sep 11:30 Wrecsam - Wrexham
-rw-r--r-- 1 me staff 77M 24 Sep 11:31 Wrexham.csv
drwxr-xr-x 5 me staff 160B 24 Sep 11:32 Wychavon
-rw-r--r-- 1 me staff 84M 24 Sep 11:33 Wychavon.csv
I would like to iterate only over the "Wrecsam - Wrexham" and "Wychavon" directories.
This is what I've tried.
for d in "$(find data -maxdepth 1 -type d -print | sort -r)"; do
echo $d
done
But this gives me output like this:
Wychavon
Wrecsam
-
Wrexham
I want output like this:
Wychavon
Wrecsam - Wrexham
What can I do?
Your for loop is not doing the right thing: quoting the command substitution makes find's entire output a single word, and the unquoted echo $d then word-splits it. You can use a glob instead of invoking an external command in a subshell:
shopt -s nullglob # make glob expand to nothing if there are no matches
for dir in data/*/; do
echo dir="$dir"
done
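If you want the bare directory names (as in the desired output above) rather than data/Wychavon/, parameter expansion can strip the prefix and the trailing slash. A small sketch, assuming the same data/ layout as the question (recreated here in a temp dir):

```shell
#!/bin/bash
# List only the directories under data/, spaces included,
# printing just the directory names.
set -eu
cd "$(mktemp -d)"
mkdir -p "data/Wrecsam - Wrexham" data/Wychavon
touch data/Wrexham.csv data/Wychavon.csv

shopt -s nullglob
for dir in data/*/; do
    name=${dir%/}       # strip trailing slash  -> data/Wychavon
    echo "${name##*/}"  # strip leading path    -> Wychavon
done
```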
Related:
Looping over directories in Bash
Why you shouldn't parse the output of ls(1)
I want to scp a directory and its contents, located in the /backup folder, that is only one day old. I am using the command below, which brings over all the directories and files from the remote host to the local one, including those older than one day, but I want only the folder and its contents that are one day old.
Command use to scp from remote to local.
sshpass -p "password" scp -rv find "/backup" -mindepth 1 -maxdepth 1 -type d -ctime -1 oracle#192.168.252.38:/backup/ /bkp
The file currently present is from July and should not be picked up by the command; for it, the command should fail. In other words, if today is 30-Aug-2018, I want only the 29-Aug-2018 data to be copied.
Files under the /backup folder:
[oracle# back]$ ls -lrt
total 12
drwxr-xr-x 2 oracle dba 12288 Jul 23 18:32 2018-07-23
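One possible shape for a fix (a sketch only; the host, password, and paths are taken from the question and are not run here): find has to execute on the remote machine, so the plain `scp -rv find ...` form just passes the word "find" to scp as a filename. Wrapping the remote find in ssh and feeding its results to scp looks like this:

```shell
#!/bin/bash
# Remote part (sketch, commented out): list day-old directories on the
# remote host, then copy each one. The quotes keep find's arguments remote.
# sshpass -p "password" ssh oracle@192.168.252.38 \
#     'find /backup -mindepth 1 -maxdepth 1 -type d -ctime -1' |
# while IFS= read -r d; do
#     sshpass -p "password" scp -r "oracle@192.168.252.38:$d" /bkp/
# done

# The selection logic itself, demonstrated locally with -mtime:
set -eu
top=$(mktemp -d)
mkdir "$top/2018-07-23" "$top/2018-08-29"
touch -d '30 days ago' "$top/2018-07-23"   # stale directory: excluded
find "$top" -mindepth 1 -maxdepth 1 -type d -mtime -1
```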
Could someone please tell me how I can print the contents of files with the same extension (for example, .coords) in multiple directories to a text file using a shell script, restricted to a specific date and time at which the directory was created?
I would be thankful for your replies.
EDIT:
ls -ltr
total 16
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 28 13:23 A
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 29 09:56 B
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 31 12:15 C
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 31 14:34 D
All the folders A,B,C,D have a file which ends with .coords (a.coords in A folder, b.coords in B folder etc..)
Firstly, I want only the folders generated on Jul 31 (i.e., folders C and D) to be accessed, and I want the contents of the c.coords and d.coords files in those folders printed into a text file (selection by date).
Secondly, is it possible to select by time as well? For example, I want only the .coords files from folders generated after 14:00 today (folder D in this case) printed into another file (selection by date as well as time).
The following command will print the contents of all *.coords files that are in directories with a modification date within the last day:
find . -type d -mtime 0 -exec sh -c 'cat {}/*.coords 2>/dev/null' \;
If you wanted to see the names of the *.coords files rather than their content, then use:
find . -type d -mtime 0 -exec sh -c 'ls {}/*.coords 2>/dev/null' \;
The age of the directory can be specified in many other ways. For example:
To specify the directory's age in minutes, use -mmin in place of -mtime.
To select on the directory's status-change time (the closest thing to a creation date that most filesystems record), rather than its last-modification time, use -cmin or -ctime.
If your file system supports it, it is also possible to select directories based on their last access time: use -amin or -atime.
It is also possible to select directories within a range of times by prefixing the age with a + or - sign. To select directories whose status changed within the last two days, use -ctime -2. By combining two such tests, you can select from a range of dates.
See man find for full details.
Variation
Suppose that we want to search based on the date of the file, rather than the date of the directory in which the file resides. In this case, a simpler command may be used to print the contents of the matching files:
find . -name '*.coords' -mtime 0 -exec cat {} \;
Suppose that we want to print both the file's name and its contents. Then we give find two actions:
find . -name '*.coords' -mtime 0 -print -exec cat {} \;
Note the use of quotation marks around *.coords. They ensure the pattern reaches find unexpanded even if the current directory happens to contain a .coords file.
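For the "after 14:00 today" part of the question, GNU find's -newermt test selects files modified after an arbitrary timestamp (this assumes GNU find; the test is not in POSIX):

```shell
#!/bin/bash
# Select .coords files modified after a given wall-clock time, e.g.
# "14:00 today" would be:  -newermt "$(date +%F) 14:00"

# Self-contained demonstration with fixed timestamps from the question:
set -eu
cd "$(mktemp -d)"
mkdir C D
touch -d '2018-07-31 12:15' C/c.coords
touch -d '2018-07-31 14:34' D/d.coords
find . -name '*.coords' -newermt '2018-07-31 14:00'   # prints ./D/d.coords
```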
I have a directory that contains sub-directories and other files and would like to update the date/timestamps recursively with the date/timestamp of another file/directory.
I'm aware that:
touch -r file directory
sets the timestamp of the file or directory to that of the reference file, but changes nothing within it. There's also the find version:
find . -exec touch -mt 201309300223.25 {} +
which would work fine if I could specify the actual file/directory and use another's date/timestamp. Is there a simple way to do this? Even better, is there a way to avoid changing/updating timestamps when doing a cp?
even better, is there a way to avoid changing/updating timestamps when doing a 'cp'?
Yes, use cp with the -p option:
-p
same as --preserve=mode,ownership,timestamps
--preserve
preserve the specified attributes (default:
mode,ownership,timestamps), if possible additional attributes:
context, links, xattr, all
Example
$ ls -ltr
-rwxrwxr-x 1 me me 368 Apr 24 10:50 old_file
$ cp old_file not_maintains <----- does not preserve time
$ cp -p old_file do_maintains <----- does preserve time
$ ls -ltr
total 28
-rwxrwxr-x 1 me me 368 Apr 24 10:50 old_file
-rwxrwxr-x 1 me me 368 Apr 24 10:50 do_maintains <----- does preserve time
-rwxrwxr-x 1 me me 368 Sep 30 11:33 not_maintains <----- does not preserve time
To recursively touch files in one tree using the timestamps of the corresponding files in another tree, the path rewriting has to happen per file, inside the -exec: a $(...) substitution in the outer command line is expanded by the shell once, before find runs, so sed only ever sees a literal {}. Moving the substitution into a small sh -c script fixes that:
find /your/path/ -exec sh -c 'touch -r "$(printf %s "$1" | sed "s#^/your/path#/your/original/path#")" "$1"' sh {} \;
In addition to 'cp -p', you can (re)create an old timestamp using 'touch -t'. See the man page of 'touch' for more details.
touch -t 200510071138 old_file.dat
I'm trying to construct a reliable shell script that removes files older than N days using find. However, the script seems to work only intermittently. Is there a better way? I list the files first to make sure I capture them, then use -exec rm {} \; to delete them.
I execute the script like so:
/home/scripts/rmfiles.sh /u05/backup/export/test dmp 1
#!/usr/bin/ksh
if [ $# != 3 ]; then
echo "Usage: rmfiles.sh <directory> <log|dmp|par> <numberofdays>" 2>&1
exit 1
fi
# Declare variables
HOURDATE=`date '+%Y%m%d%H%M'`;
CLEANDIR=$1;
DELETELOG=/tmp/cleanup.log;
echo "Listing files to remove..." > $DELETELOG 2>&1
/usr/bin/find $CLEANDIR -name "*.$2" -mtime +$3 -exec ls -ltr {} \; > $DELETELOG 2>&1
echo "Removing files --> $HOURDATE" > $DELETELOG 2>&1
#/usr/bin/find $CLEANDIR -name "*.$2" -mtime +$3 -exec rm {} \; > $DELETELOG 2>&1
My sample directory clearly has files older than one day as of today, but find is not picking them up now, although it did during some earlier testing.
Thu Sep 26 08:54:57 PDT 2013
total 161313630
-rw------- 1 oracle dba 10737418240 Sep 24 14:17 testexp01.dmp
-rw------- 1 oracle dba 10737418240 Sep 24 14:20 testexp02.dmp
-rw------- 1 oracle dba 10737418240 Sep 24 14:30 testexp03.dmp
-rw------- 1 oracle dba 508 Sep 24 15:41 EXPORT-20130924.log
-rw------- 1 oracle dba 509 Sep 25 06:00 EXPORT-20130925.log
-rw------- 1 oracle dba 508 Sep 26 08:30 EXPORT-20130926.log
Apart from a couple of small issues, the script looks good in general. My guess is that you want to add -daystart to the list of options, so that the base for the -mtime test is "measured from the beginning of today rather than from 24 hours ago. This option only affects tests which appear later on the command line."
If you have GNU find, then try find -D tree,search,stat,rates to see what is going on.
Some comments:
Always quote variables to make sure odd spaces don't have an effect: /usr/bin/find "$CLEANDIR" -name "*.$2" -mtime "+$3" .... Same with CLEANDIR="$1"
Don't terminate lines with ;, it's bad style.
You can replace -exec ls -ltr {} \; with -ls or -print. That way, you don't have to run the find command twice.
You should quote {} since some shells interpret them as special characters.
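Putting those comments together (plus one more: every `> $DELETELOG` truncates the log, so each command erases what the previous one wrote; only the first write should use >), a hedged rewrite of the script as a shell function might look like this:

```shell
#!/bin/bash
# Sketch of rmfiles.sh with the comments applied: quoted expansions,
# -daystart, -print instead of spawning ls per file, and >> appends so
# later commands don't truncate the log written by earlier ones.
rmfiles() {
    CLEANDIR=$1 EXT=$2 DAYS=$3
    DELETELOG=${DELETELOG:-/tmp/cleanup.log}
    HOURDATE=$(date '+%Y%m%d%H%M')

    echo "Listing files to remove..." > "$DELETELOG"
    find "$CLEANDIR" -daystart -name "*.$EXT" -mtime "+$DAYS" -print >> "$DELETELOG" 2>&1
    echo "Removing files --> $HOURDATE" >> "$DELETELOG"
    # Uncomment once the listing above looks right:
    # find "$CLEANDIR" -daystart -name "*.$EXT" -mtime "+$DAYS" -exec rm {} + >> "$DELETELOG" 2>&1
}
```

Usage would stay the same as before, e.g. rmfiles /u05/backup/export/test dmp 1.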
From man find:
The entry for -mtime says to read the comment at -atime:
"When find figures out how many 24-hour periods ago the file was last accessed, any fractional part is ignored, so to match -atime +1, a file has to have been accessed at least two days ago." The same is true for -mtime.