I want to scp a directory and its contents, located under /backup, that is only 1 day old. I am using the command below, which brings over all directories and files from the remote to the local host, including those older than 1 day, but I want only the folder and its contents that are 1 day old.
Command used to scp from remote to local:
sshpass -p "password" scp -rv $(find "/backup" -mindepth 1 -maxdepth 1 -type d -ctime -1) oracle@192.168.252.38:/backup/ /bkp
A file from July is present; it should not be picked up by the command above, and the copy should skip it rather than transfer it.
What I want: if today is 30-Aug-2018, only the data from 29-Aug-2018 should be copied.
Files under the /back folder:
[oracle@back]$ ls -lrt
total 12
drwxr-xr-x 2 oracle dba 12288 Jul 23 18:32 2018-07-23
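The find and the scp have to run on different machines, so splicing them into one command line won't work as written. One approach (a sketch, not a drop-in fix; the host, user, and password are the placeholders from the question) is to let the remote side run find over ssh, then copy each matching directory:

```shell
# Placeholders from the question; adjust to your environment.
REMOTE="oracle@192.168.252.38"

# 1) Ask the remote host which /backup subdirectories changed in the last day:
dirs=$(sshpass -p "password" ssh "$REMOTE" \
        'find /backup -mindepth 1 -maxdepth 1 -type d -ctime -1')

# 2) Copy each matching directory recursively (assumes names without spaces):
for d in $dirs; do
    sshpass -p "password" scp -r "$REMOTE:$d" /bkp/
done
```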
I prefer to use a bash script, because getfacl/setfacl have a big problem (bug).
Before asking this question, I also searched for a way to back up and restore owner and permissions in pure bash. Unfortunately, no one gave a perfect answer.
So I use an easy way to back up and restore owner and permissions:
getfacl -R . > permissions.facl
setfacl --restore=permissions.facl
If I want to exclude .git from ., how do I do that?
Yes, you can.
For example, my directory looks like this:
[root@967dd7743677 test]# ls -la
total 20
drwxr-xr-x 4 root root 4096 Jan 11 06:37 .
drwxr-xr-x 1 root root 4096 Jan 2 11:08 ..
drwxr-xr-x 8 root root 4096 Jan 2 12:56 .git
drwxr-xr-x 2 root root 4096 Jan 11 06:37 1one
-rw-r--r-- 1 root root 19 Jan 2 12:55 testfile
[root@967dd7743677 test]#
By using find, you can exclude any directory you want:
find . -type f -not -path '*/\.git/*' -exec getfacl -R {} \;
So via -exec we are calling getfacl -R on each matching file, and you can redirect the output:
find . -type f -not -path '*/\.git/*' -exec getfacl -R {} \; > permissions.facl
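The redirected output has the same format a plain getfacl dump produces, so the full backup/restore cycle with the .git exclusion looks like this (a sketch; getfacl/setfacl must be installed, and both commands are run from the directory being covered):

```shell
# Back up ACLs/owners for every file except anything under .git:
find . -type f -not -path '*/.git/*' -exec getfacl {} \; > permissions.facl

# Later, replay the recorded owner/permission entries:
setfacl --restore=permissions.facl
```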
Hope it helps.
I loaded a find command into Autosys. It zips anything older than 15 days.
The script only has two lines.
#!/bin/bash
find /casper/dir/usa.* -type f -mtime +15 -exec gzip --verbose {} \;
The problem is that it finishes successfully immediately, but continues to run and writes everything to the error file.
casperrd@usa04 1026$ ls -ltr /unity_apps/casperrd/logs/autosys/*CASPER_JOB_AUTOSYS*
-rw-r--r-- 1 capsergrp casper 0 Aug 21 16:35 /unity_apps/casperrd/logs/autosys/CAPSER_JOB_AUTOSYS_20180821_20:35:20.out
-rw-r--r-- 1 capsergrp casper 662 Aug 21 16:43 /unity_apps/casperrd/logs/autosys/CAPSER_JOB_AUTOSYS_20180821_20:35:20.err
casperrd@usa04 1027$
I need to show which files were zipped, and I want that written to the .out file, not the .err file:
cat /unity_apps/casperrd/logs/autosys/CAPSER_JOB_AUTOSYS_20180821_20:35:20.err
/casper/log/casperjob.20180622.txt: 94.2% -- replaced with /casper/log/casperjob.20180622.txt.gz
/casper/log/casperjob.20180625.csv: 74.6% -- replaced with /casper/log/casperjob.20180625.csv.gz
/casper/log/casperjob.20180625.txt: 94.2% -- replaced with /casper/log/casperjob.20180625.txt.gz
/casper/log/casperjob.20180626.csv:
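gzip --verbose prints its per-file report on stderr, not stdout, which is why everything lands in the .err file even though the job succeeds. Merging stderr into stdout at the end of the find line sends the report wherever stdout (the .out file) goes. A minimal sketch of the revised two-line script:

```shell
#!/bin/bash
# gzip --verbose reports to stderr; 2>&1 merges that into stdout, so the
# scheduler captures the messages in the .out file instead of the .err file.
find /casper/dir/usa.* -type f -mtime +15 -exec gzip --verbose {} \; 2>&1
```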
I have a directory of files:
/home/user/files/1.txt
/home/user/files/2.txt
/home/user/files/3.txt
I'd like to zip up the files directory into files.zip so when extracted I get:
files/1.txt
files/2.txt
files/3.txt
I know I can do:
# bash
cd /home/user; zip -r files.zip files/
Is there a way to do this without cding to the user directory?
I know that the --junk-paths flag will store just the filenames and junk the path but I'd like to keep the files directory as a container.
I couldn't find a direct way using the zip command, but you can try the tar command with its -C option.
$ pwd
/home/shenzi
$ ls -l giga/files
total 3
-rw-r--r-- 1 shenzi Domain Users 3 Aug 5 11:24 1.txt
-rw-r--r-- 1 shenzi Domain Users 4 Aug 5 11:25 2.txt
-rw-r--r-- 1 shenzi Domain Users 9 Aug 5 11:25 3.txt
$ tar -C giga -cvf files.zip files/*
files/1.txt
files/2.txt
files/3.txt
$ tar -tvf files.zip
-rw-r--r-- shenzi/Domain Users 3 2014-08-05 11:24 files/1.txt
-rw-r--r-- shenzi/Domain Users 4 2014-08-05 11:25 files/2.txt
-rw-r--r-- shenzi/Domain Users 9 2014-08-05 11:25 files/3.txt
Use tar -xvf files.zip to extract. (Despite the .zip name, the archive is a tar file, so unzip won't read it.)
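If a real zip archive is required, an alternative is to do the cd inside a subshell: the parentheses confine the directory change to a child process, so the caller's working directory is untouched (assumes the zip utility is installed):

```shell
# The cd happens in a subshell; after the closing parenthesis the caller's
# working directory is unchanged. The archive lands in /home/user.
(cd /home/user && zip -r files.zip files/)
```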
Could someone please tell me how I can print the contents of files with a given extension (for example, .coords) from multiple directories into a text file using a shell script, restricted by the date and time the directories were created?
I would be thankful for your replies.
EDIT:
ls -ltr
total 16
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 28 13:23 A
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 29 09:56 B
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 31 12:15 C
drwxrwxr-x 2 prakkisr prakkisr 4096 Jul 31 14:34 D
All the folders A, B, C, and D contain a file ending in .coords (a.coords in folder A, b.coords in folder B, etc.).
Firstly, I want only the folders generated on Jul 31 (i.e. folders C and D) to be accessed, and the contents of their c.coords and d.coords files printed into a text file (selection by date).
Secondly, is it possible to select by time as well? For example, I want only the .coords files from folders generated after 14:00 today (in this case folder D) printed into another file (selection by date and time).
The following command will print the contents of all *.coords files that are in directories with a modification date within the last day:
find . -type d -mtime 0 -exec sh -c 'cat "$1"/*.coords 2>/dev/null' sh {} \;
If you wanted to see the names of the *.coords files rather than their content, then use:
find . -type d -mtime 0 -exec sh -c 'ls "$1"/*.coords 2>/dev/null' sh {} \;
The age of the directory can be specified in many other ways. For example:
To specify the directory's age in minutes, use -mmin in place of -mtime.
To test against the directory's status-change time (ctime) rather than its last modification time, use -cmin or -ctime. (On most Unix filesystems this is the inode change time; there is no true creation time.)
If your file system supports it, it is also possible to select directories based on their last access time: use -amin or -atime.
It is also possible to select directories within a range of times by prefixing the age with a + or - sign. To select directories changed more recently than two days ago, use -ctime -2. By combining two such specifiers, you can select from a range of dates.
See man find for full details.
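For instance, to select directories modified between one and two days ago, combine an upper and a lower bound (shown with -mtime; the same pattern works with -ctime or -atime):

```shell
# -mtime +0 keeps directories at least one full day old;
# -mtime -2 keeps those changed within the last two days.
find . -type d -mtime +0 -mtime -2
```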
Variation
Suppose that we want to search based on the date of the file, rather than the date of the directory in which the file resides. In this case, a simpler command may be used to print the contents of the matching files:
find . -name '*.coords' -mtime 0 -exec cat {} \;
Suppose that we want to print both the file's name and its contents. Then we add two actions to the find command:
find . -name '*.coords' -mtime 0 -print -exec cat {} \;
Note the use of quotation marks around *.coords. This ensures that the command still works if the current directory happens to contain a .coords file.
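For the second part of the question, selecting by clock time rather than whole days, GNU find's -newermt test compares the modification time against an arbitrary timestamp (a sketch; -newermt is GNU-specific, and another.txt is just an example output name):

```shell
# Collect the contents of every *.coords file modified after 14:00 today.
find . -name '*.coords' -newermt "$(date +%F) 14:00" -exec cat {} + > another.txt
```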
I'm trying to construct a reliable shell script that removes files older than N days using find. However, the script seems to work only intermittently. Is there a better way? I list the files first to make sure I capture them, then use -exec rm {} \; to delete them.
I execute the script like so:
/home/scripts/rmfiles.sh /u05/backup/export/test dmp 1
#!/usr/bin/ksh
if [ $# != 3 ]; then
echo "Usage: rmfiles.sh <directory> <log|dmp|par> <numberofdays>" 2>&1
exit 1
fi
# Declare variables
HOURDATE=`date '+%Y%m%d%H%M'`;
CLEANDIR=$1;
DELETELOG=/tmp/cleanup.log;
echo "Listing files to remove..." > $DELETELOG 2>&1
/usr/bin/find $CLEANDIR -name "*.$2" -mtime +$3 -exec ls -ltr {} \; > $DELETELOG 2>&1
echo "Removing files --> $HOURDATE" > $DELETELOG 2>&1
#/usr/bin/find $CLEANDIR -name "*.$2" -mtime +$3 -exec rm {} \; > $DELETELOG 2>&1
My sample directory clearly has files older than one day as of today, but find is not picking them up now, even though it did during some earlier testing.
Thu Sep 26 08:54:57 PDT 2013
total 161313630
-rw------- 1 oracle dba 10737418240 Sep 24 14:17 testexp01.dmp
-rw------- 1 oracle dba 10737418240 Sep 24 14:20 testexp02.dmp
-rw------- 1 oracle dba 10737418240 Sep 24 14:30 testexp03.dmp
-rw------- 1 oracle dba 508 Sep 24 15:41 EXPORT-20130924.log
-rw------- 1 oracle dba 509 Sep 25 06:00 EXPORT-20130925.log
-rw------- 1 oracle dba 508 Sep 26 08:30 EXPORT-20130926.log
Apart from a couple of small issues, the script looks good in general. My guess is that you want to add -daystart to the list of options so the base for the -mtime test is measured "from the beginning of today rather than from 24 hours ago. This option only affects tests which appear later on the command line."
If you have GNU find, then try find -D tree,search,stat,rates to see what is going on.
Some comments:
Always quote variables to make sure odd spaces don't have an effect: /usr/bin/find "$CLEANDIR" -name "*.$2" -mtime "+$3" .... Same with CLEANDIR="$1"
Don't terminate lines with a semicolon; it's bad style.
You can replace -exec ls -ltr {} \; with -ls or -print. That way, you don't have to run the find command twice.
You should quote {} since some shells interpret them as special characters.
From man find, the entry for -mtime says to read the comment at -atime:
"When find figures out how many 24-hour periods ago the file was last accessed, any fractional part is ignored, so to match -atime +1, a file has to have been accessed at least two days ago." The same is true for -mtime.
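Putting the answer's suggestions together (adding -daystart, quoting the variables, and listing in a single pass with -ls; also appending with >> so later redirects don't truncate the log, which the original script's repeated > does), the cleanup could be sketched as:

```shell
#!/usr/bin/ksh
# Sketch of a revised rmfiles.sh incorporating the comments above.
CLEANDIR="$1"; EXT="$2"; DAYS="$3"
DELETELOG=/tmp/cleanup.log

echo "Listing files to remove..." > "$DELETELOG"
# -daystart measures -mtime from midnight; -ls both logs and selects in one pass.
/usr/bin/find "$CLEANDIR" -name "*.$EXT" -daystart -mtime "+$DAYS" -ls >> "$DELETELOG" 2>&1
# Once the listing looks right, append: -exec rm -- {} \;
```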