Crontab doesn't execute script, but it works when run manually. Mac OS X - shell

I have a crontab file like this:
#!/bin/sh
PATH=/Users/name/.rvm/gems/ruby-2.6.3@rails-6.0.0.2/bin:/Users/name/.rvm/gems/ruby-2.6.3@global/bin:/Users/name/.rvm/rubies/ruby-2.6.3/bin:/Users/name/bin:/usr/local/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/name/.rvm/bin
cd ~/Documents/mydirectory/
bash -c 'ls -1t | tail -n +7 | xargs rm -f'
ls -1t | tail -n +7 | xargs rm -f # this is not working either.
I want to delete files in the directory if the number of files is more than 7.
I set PATH as well since it's a common gotcha.
If I run the script manually it works.
What is the problem?
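For reference, the crontab entry that runs such a script might look something like this (a sketch; the schedule and the /Users/name/bin/cleanup.sh path are assumptions, not given in the question):
# run the cleanup script every day at 02:00 and capture output for debugging
0 2 * * * /bin/sh /Users/name/bin/cleanup.sh >> /tmp/cleanup.log 2>&1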

My problem was csrutil (macOS System Integrity Protection). It should be disabled.
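If crsutil here refers to macOS's csrutil (System Integrity Protection), its current state can be checked like this (a sketch; actually disabling SIP has to be done from Recovery mode):
csrutil status
# prints something along the lines of: System Integrity Protection status: enabled.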

Related

Find last created tar.gz and extract it

I need to find the last created tar.gz file and extract it to some directory, something like this:
ls -t $(pwd)/Backup_db/ | head -1 | xargs tar xf -C /somedirectory
How to do it the right way in CentOS 7?
You can find the most recently modified file with command substitution and use that result in place of a filename. Create the new directory first, then extract the tar file into it.
new_dir="path/to/new/dir"
mkdir -p "$new_dir"
tar -zxvf "$(ls -t *.tar.gz | head -1)" -C "$new_dir"
Note that ls -t <dir> will not show the full <dir>/<filename> path for the files, but ls -t <dir>/* will, so after also reordering the xargs flags (and forcing -n1 for safety), the following should work for you:
ls -t $(pwd)/Backup_db/*.tar.gz | head -1 | xargs -n1 tar -C /somedirectory -xf

How to delete all files except the N newest files?

This command allows me to log in to a server and change to a specific directory from my PC:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash"
How can I then do this operation in that directory? I want to be able to basically delete all files except the N newest.
find ./tmp/ -maxdepth 1 -type f -iname '*.tgz' | sort -n | head -n -10 | xargs rm -f
This command should work:
ls -t *.tgz | tail -n +11 | xargs rm -f
Warning: Before doing rm -f, confirm that the files being listed by ls -t *.tgz | tail -n +11 are as expected.
How it works:
ls lists the contents of the directory; the -t flag sorts by modification time (newest first). See the man page of ls.
tail -n +11 outputs starting from line 11. Please refer to the man page of tail for more details.
If the system is Mac OS X, you can also delete based on creation time: use ls with the -Ut flag, which sorts the contents by creation time.
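For example, the same pipeline sorted by creation time on macOS could look like this (a sketch using the BSD ls -U flag mentioned above):
ls -Ut *.tgz | tail -n +11 | xargs rm -f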
You can use this command:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted; ls -t *.tgz | tail -n +11 | xargs rm -f; bash"
Inside the quotes, we can add whatever operations need to be performed on the remote machine, but every command should be terminated with a semicolon (;).
Note: This includes the same command suggested by silentMonk. It is simple and it works, but verify it once before performing the operation.

using "wc -l" on script counts more than using on terminal

I'm making a bash script and it's like this:
#!/bin/bash
DNUM=$(ls -lAR / 2> /dev/null | grep '^d' | wc -l)
echo there are $DNUM directories.
The problem is that when I run this line directly in the terminal:
ls -lAR / 2> /dev/null | grep '^d' | wc -l
I get a number.
But when I run the script, it displays a greater number, around 30 to 50 more.
What is the problem here?
Why is the "wc" command counting more lines when running it from a script?
You may have different directory roots for the two runs. Instead of using ls to find only the directories, you can use this:
find parent_directory -type d
and pipe to wc -l to count.
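Put together (parent_directory is a placeholder for the root you want to count under):
find parent_directory -type d | wc -l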
The /proc directory contains process entries that are treated as directories and will change from run to run. To exclude it from the count, use
find / -path /proc -prune -o -type d -print | wc -l
To find the differences in your exact case, I would suggest running:
#!/bin/bash
for r in 1 2; do
  ls -lAR / 2> /dev/null | grep '^d' > out${r}.txt
done
diff -Nura out1.txt out2.txt
rm -f out1.txt out2.txt
But as most people have already said, it would make sense to exclude directories like /sys, /proc, etc.
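For example, pruning several virtual filesystems at once might look like this (a sketch; the exact list of paths is an assumption):
find / \( -path /proc -o -path /sys -o -path /dev \) -prune -o -type d -print | wc -l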

How to pipe commands in Ubuntu

How do I pipe commands and their results in Ubuntu when writing them in the terminal? I would write the following commands in sequence:
$ ls | grep ab
abc.pdf
cde.pdf
$ cp abc.pdf cde.pdf files/
I would like to pipe the results of the first command into the second command and write it all on one line. How do I do that?
something like
$ cp "ls | grep ab" files/
(the above is a contrived example and can be written as cp *.pdf files/)
Use the following:
cp `ls | grep ab` files/
Well, since the xargs person gave up, I'll offer my xargs solution:
ls | grep ab | xargs echo | while read f; do cp $f files/; done
Of course, this solution suffers from an obvious flaw: files with spaces in them will cause chaos.
An xargs solution without this flaw? Hmm...
ls | grep ab | xargs '-d\n' bash -c 'docp() { cp "$@" files/; }; docp "$@"'
Seems a bit clunky, but it works. Unless you have files with newlines in their names, I mean. However, anyone who does that deserves what they get. Even that is solvable:
find . -mindepth 1 -maxdepth 1 -name '*ab*' -print0 | xargs -0 bash -c 'docp() { cp "$@" files/; }; docp "$@"'
To use xargs, you need to ensure that the filename arguments are the last arguments passed to the cp command. You can accomplish this with the -t option to cp to specify the target directory:
ls | grep ab | xargs cp -t files/
Of course, even though this is a contrived example, you should not parse the output of ls.
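If you want to avoid parsing ls entirely, a find-based sketch of the same idea (still relying on GNU cp's -t option) could be:
find . -maxdepth 1 -type f -name '*ab*' -print0 | xargs -0 cp -t files/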

How do I write a shell script to remove files unzipped into the wrong directory?

I accidentally unzipped files into the wrong directory; actually there are hundreds of files... Now the directory is a mix of the original files and the wrongly unzipped files. I want to pick out the unzipped files and remove them using a shell script, e.g.
$unzip foo.zip -d test_dir
$cd target_dir
$ls test_dir | rm -rf
Nothing happened and no files were deleted. What's wrong with my command? Thanks!
The following script has two main benefits over the other answers thus far:
It does not require you to unzip a whole 2nd copy to a temp dir (I just list the file names)
It works on files that may contain spaces (parsing ls will break on spaces)
while read -r _ _ _ file; do
arr+=("$file")
done < <(unzip -qql foo.zip)
rm -f "${arr[#]}"
The right way to do this is with xargs:
$find ./test_dir -print | xargs rm -rf
Edit: Thanks to SiegeX for explaining the OP's question to me.
This reads the wrong files from the test dir and removes them from the target dir.
$unzip foo.zip -d /path_to/test_dir
$cd target_dir
(cd /path_to/test_dir ; find ./ -type f -print0 ) | xargs -0 rm
I use find with -print0 and xargs -0 because filenames can contain blanks and newlines. But if that is not your case, you can run it with ls:
$unzip foo.zip -d /path_to/test_dir
$cd target_dir
(cd /path_to/test_dir ; ls ) | xargs rm -rf
Before executing, you should test the script by changing rm to echo.
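For example, a dry run of the ls variant above might look like this (echo just prints the command that would be executed):
(cd /path_to/test_dir ; ls ) | xargs echo rm -rf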
Try
for file in $( unzip -qql FILE.zip | awk '{ print $4 }'); do
rm -rf DIR/YOU/MESSED/UP/$file
done
unzip -l lists the contents with a bunch of information about the zipped files. You just have to grep the file name out of it.
EDIT: using -qql as suggested by SiegeX
The following worked for me (bash)
unzip -l filename.zip | awk '{print $NF}' | xargs rm -Rf
Do this:
$ ls test_dir | xargs rm -rf
You need ls test_dir | xargs rm -rf as your last command
Why:
rm doesn't take input from stdin, so you can't pipe the list of files to it. xargs takes the output of the ls command and passes it to rm as arguments so that rm can delete the files.
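A quick illustration of the difference (the file names are hypothetical):
printf 'file1\nfile2\n' | rm -f        # does nothing: rm ignores stdin
printf 'file1\nfile2\n' | xargs rm -f  # works: xargs turns the lines into arguments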
Compacting the previous one: run this command in /DIR/YOU/MESSED/UP
unzip -qql FILE.zip | awk '{print "rm -rf " $4 }' | sh
enjoy
