lftp: how to recursively set permissions; first by directory, then by file - ftp

When securing a Drupal or WordPress installation on a shared host that does not expose SSH access (a lousy situation, fwiw), lftp seems like the right approach to batch-setting permissions for directories and files. The find command boasts that you can redirect its output, so one should be able to run find, grep the output to match only lines ending in "/" (meaning a directory), set the permissions on those matches to 755, do the inverse for file matches and set them to 644, and then fine-tune specific files such as settings.php and so forth.
lftp prompt> find . | grep "/$" | xargs chmod -v 755
This isn't working, and I'm sure I have failed to chain these commands in the correct sequence and format.
How to get this to work?
Update: by "isn't working" I mean that the above command produces no output to the console, nor to the lftp error log. It isn't running these commands locally, fwiw. I'll reduce the command as a demonstration:
find . | grep "/$"
Will take the output of "find" and return matches, here, directories, by nature of the string match:
./daily/
./ffmpeg-installer/
./hourly/
./includes/
./includes/database/
./includes/database/mysql/
./and_so_forth_on_down
Which is cool, since I wish to perform a chmod (an internal command for lftp, with support varying by FTP server). So I expand the command like this:
find . | grep "/$" | xargs echo
Which outputs — nothing. No error output, either. The pipe from grep to xargs isn't happening.
My goal is to form the equivalent of:
chmod 755 ./daily/
chmod 755 ./ffmpeg-installer/
In lftp, the chmod command is performing an ftp-server-permissions change, not a local perms change.

For an explanation of why this does not work as expected, read on - for a solution to the given problem, scroll down.
The answer can be found in the manpage for lftp, which states that
"[s]ome commands allow redirecting their output (cat, ls, ...) to file or via pipe to external command."
So, when you use a pipe like this on a command that does support redirection in lftp, you are piping its output to your local tools. That will eventually result in chmod trying to change the permissions of a file/directory on your local machine, and it will most likely fail unless you coincidentally have the same directory layout in your current local directory - which is probably the problem you encountered.
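A quick way to picture the split (a minimal illustration, not from the original post): the part before the | runs inside lftp against the server, while everything after the | is handed to your local tools.
lftp> ls | wc -l
Here ls lists the remote directory, but the wc that counts the lines is your local /usr/bin/wc.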
The grep + xargs pipe does work, I just tested the following:
lftp> find -d 2 | grep "/$"
./
./applications/
./lost+found/
./netinfo/
./packages/
./security/
./systems/
lftp> find -d 2 | grep "/$" | xargs echo
./ ./applications/ ./lost+found/ ./netinfo/ ./packages/ ./security/ ./systems/
My wild guess is that it did not appear to work for you because you did not specify a max-depth for find, and the network connection + buffering in the pipe got in the way. When I try the same on a directory containing many files/subfolders it takes a really long time to finish and print. Did the command actually finish for you without output?
But still, what you are trying to do is not possible. As I stated, the right-hand side of the pipe runs external commands (even if an inbuilt of the same name exists), as explained by the manual, so
lftp> chmod 644 foobar
and
lftp> echo "foobar" | xargs chmod 644
are not equivalent.
Yes, chmod is an inbuilt, but when used on the right-hand side of a pipe in the client it will not execute the inbuilt - the manpage clearly states this, and you can easily test it yourself. Try the following commands and check their output:
lftp> echo foo | uname -a
lftp> echo foo | ls -al
lftp> echo foo | chmod --help
lftp> chmod --help
Solution
As far as a solution to your problem is concerned, you can try something along the lines of:
#!/bin/bash
server="ftp.foo.bar"
root_folder="/my/path"
{
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep "/$"
quit
EOF
} | awk '{ printf "chmod 755 \"%s\"\n", $0 }'
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep -v "/$"
quit
EOF
} | awk '{ printf "chmod 644 \"%s\"\n", $0 }'
} | lftp "${server}"
This logs in to your server, cds to the folder where you want to start recursively changing permissions, uses find + grep to find all directories, logs out, and pipes this file list into awk to build chmod commands around it. It then repeats the whole process for files, and finally pipes the whole list of commands into a new lftp invocation to actually run the generated chmod commands.
You will also have to add your credentials to the lftp invocations and you might want to comment out the final | lftp "${server}" to check if it produces the desired output before you actually run the whole thing. Please report back if this works for you!
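If your credentials are not already in ~/.netrc, one way to supply them is lftp's -u option; a minimal sketch (the user name and password are placeholders, not taken from your setup):
user="myuser"
pass="mysecret"
lftp -u "${user},${pass}" "${server}"
Each lftp "${server}" invocation in the script above would then become lftp -u "${user},${pass}" "${server}".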

Related

Bash command to copy a log file to another directory as soon as specified expression is found in it

I've got a log file that is rotated automatically when it reaches a certain size. The system keeps 5 rotated logs at a time, the older ones are deleted, and the lifetime of a log file is about 20 minutes.
The task is to monitor the log file (system.log) for a specified error code and when it occurs – to copy the file into another directory, before it is deleted.
I tried this:
tail -F system.log | grep -l "error code" | xargs cp /another/directory
but it returns "cp: target 'input)' is not a directory".
Apparently this is because the grep command does not return the file name as soon as the error code is found in it, as I expected.
So I need some help here please.
The normal order of arguments to cp is
cp source destination
xargs puts its arguments at the end of the command, so you're executing the command
cp /another/directory input
which has its arguments backwards.
To solve this, use the -t option to cp to specify the destination explicitly.
xargs cp -t /another/directory
I tried this: tail -F system.log | grep -l "error code" | xargs -i cp {} /another/directory and it returned 'cp: cannot stat '(standard input)': No such file or directory'. It seems that something is wrong with the part tail -F system.log | grep -l "error code", as it returns (standard input) instead of the name of the file.
Oops. I can't believe I didn't see that before...
$: echo foo | grep -l foo
(standard input)
tail is sending the file to grep, so grep's file IS stdin, so that's what it's listing.
edit
Are you using logrotate?
Check out the manual and look carefully at the prerotate/postrotate/firstaction/lastaction options.
For an example, see Option 7 here: have your script scan the just-rotated log and, if it has the trigger string in it, copy it somewhere.
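As a sketch of that approach (the paths, size and destination directory are assumptions, not taken from your system), a postrotate hook could scan the file that was just rotated and copy it if the trigger string is present:
/var/log/system.log {
    size 10M
    rotate 5
    postrotate
        # with these settings the just-rotated file is system.log.1;
        # keep a copy elsewhere if it contains the trigger string
        if grep -q "error code" /var/log/system.log.1; then
            cp /var/log/system.log.1 /another/directory/
        fi
    endscript
}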

One-liner to check whether a file exists, then feed it to xargs

I have a one-liner that spits out all of the files modified in my current feature branch, which is branched off of a shared, upstream development branch. I then hope to feed the files that exist to the phpcs linter via xargs -- something like this:
git diff --name-only shared-upstream-development-branch | grep "\.php$" | xargs test -f {} && echo {} | xargs vendor/bin/phpcs
However, when I run this, I get something like the following:
test: extra argument
‘path/to/my/file.php’
I feel like I'm close to having a working solution.
How can I modify the one-liner above to correctly see if each PHP file still exists, then feed it onward to phpcs?
I know that everything up through the output of the grep command works well, as removing the two parts of the one-liner that refer to xargs gives me a nice list of file names.
(I also tried using --diff-filter=d to filter out deleted files, but this does not seem to work with my version of git, as I still get a complaint from phpcs about how a file "does not exist.")
&& separates commands, and is not an argument to xargs; you need to execute an explicit shell to use &&.
xargs -I{} sh -c 'test -f "$1" && echo "$1"' _ {}
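Plugged back into the original pipeline, the whole thing would look roughly like this (a sketch assuming GNU xargs and the same branch name and phpcs path as in the question):
git diff --name-only shared-upstream-development-branch \
  | grep "\.php$" \
  | xargs -I{} sh -c 'test -f "$1" && echo "$1"' _ {} \
  | xargs vendor/bin/phpcs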

Why is this bash script not changing path?

I wrote a basic script which changes the directory to a specific path and shows the list of folders, but my script shows the list of files of the current folder where my script lies, instead of the one I specify in the script.
Here is my script:
#!/bin/bash
v1="$(ls -l | awk '/^-/{ print $NF }' | rev | cut -d "_" -f2 | rev)"
v2=/home/PS212-28695/logs/
cd $v2 && echo $v1
Does anyone know what I am doing wrong?
Your current script makes no sense, really. The v1 variable is NOT a command to execute as you expect; due to the $() syntax it is in fact the output of ls -t at the moment of assignment, and that's why you have files from the current directory there: that is your working directory at that particular moment. So you should rather be doing an ordinary
ls -t /home/PS212-28695/logs/
EDIT
it runs, but what if I need to store the ls -t output in a variable?
Then this is the same syntax you already had, but with proper arguments:
v1=$(ls -t /home/PS212-28695/logs/)
echo ${v1}
If for any reason you want to cd, then you have to do that prior to setting v1, for the same reason I explained above.
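For example (same path as in your script; the exit guard is just a suggestion):
cd /home/PS212-28695/logs/ || exit 1
v1=$(ls -t)
echo "${v1}"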

Can I use a variable in a file path in bash? If so, how?

I'm trying to write a small shell script to find the most recently-added file in a directory and then move that file elsewhere. If I use:
ls -t ~/directory | head -1
and then store this in the variable VARIABLE_NAME, why can't I then move this to ~/otherdirectory via:
mv ~/directory/$VARIABLE_NAME ~/otherdirectory
I've searched around here and Googled, but there doesn't seem to be any information on using variables in file paths. Is there a better way to do this?
Edit: Here's the portion of the script:
ls -t ~/downloads | head -1
read diags
mv ~/downloads/$diags ~/desktop/testfolder
You can do the following in your script:
diags=$(ls -t ~/downloads | head -1)
mv ~/downloads/"$diags" ~/desktop/testfolder
In this case, diags is assigned the output of ls -t ~/downloads | head -1, which mv can then use.
The following commands
ls -t ~/downloads | head -1
read diags
are probably not what you intend: the read command does not receive its input from the command before it. Instead, it waits for input from stdin, which is why you believe the script to 'hang'. Maybe you wanted to do the following (at least this was my first, erroneous attempt at providing a better solution):
ls -t ~/downloads | head -1 | read diags
However, this will (as mentioned by alvits) also not work, because each element of the pipe runs in a separate process: the variable diags is therefore set in a subshell, not in the parent shell.
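You can see this in an interactive bash session (a toy example, assuming bash's default behaviour where every element of a pipeline runs in a subshell):
echo "somefile.txt" | read diags
echo "$diags"    # prints an empty line: diags was only set inside the subshell that ran read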
The proper solution therefore is:
diags=$(ls -t ~/downloads | head -1)
There are, however, further possible problems which would make the subsequent mv command fail (a more defensive version is sketched below):
The directory might be empty.
The file name might contain spaces, newlines etc.
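A slightly more defensive variant that guards against an empty directory and quotes the file name (still only a sketch: it does not cope with newlines in file names):
diags=$(ls -t ~/downloads | head -1)
if [ -n "$diags" ]; then
    mv ~/downloads/"$diags" ~/desktop/testfolder/
fi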

How to run fswatch to call a program with static arguments?

I used to use fswatch v0.0.2 like so (in this instance, to run the Django test suite when a file changed):
$>fswatch . 'python manage.py test'
this works fine.
I wanted to exclude some files that were causing the test to run more than once per save (Sublime Text was saving a .tmp file, and I suspect .pyc files were also causing this).
So I upgraded fswatch to enable the -e mode.
However, the way fswatch works has changed, which is causing me trouble: it now expects to be piped into another command, like so:
$>fswatch . | xargs -n1 program
I can't figure out how to pass in arguments to the program here. e.g. this does not work:
$>fswatch . | xargs -n1 python manage.py test
nor does this:
$>fswatch . | xargs -n1 'python manage.py test'
how can I do this without packaging up my command in a bash script?
The fswatch documentation (the Texinfo manual, the wiki, or the README) has examples of how this is done:
$ fswatch [opts] -0 -o path ... | xargs -0 -n1 -I{} your full command goes here
Pitfalls:
xargs -0, fswatch -0: use them to make sure paths with newlines are interpreted correctly.
fswatch -o: use it to have fswatch "bubble" all the events in the set into a single one printing only the number of records in the set.
-I{}: specifying a placeholder is the trick you missed; it makes xargs interpret your command arguments correctly in those cases where you do not want the record (in this case, since -o was used, the number of records in the set) to be passed down to the command being executed.
An alternative answer that doesn't fight xargs' default reason for being - passing the output on as arguments to the command to be run:
fswatch . | (while read; do python manage.py test; done)
Which is still a bit wordy/syntaxy, so I have created a super simple bash script fswatch-do that simplifies things for me:
#!/bin/bash
(while read; do "$@"; done)
usage:
fswatch -r -o -e 'pyc' somepath | fswatch-do python manage.py test someapp.SomeAppTestCase
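For the fswatch-do wrapper to be callable like that, it needs to be executable and somewhere on your PATH (the install location is just an example):
chmod +x fswatch-do
mv fswatch-do /usr/local/bin/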
