Executing command remotely | Keep only 5 recent files/directories - bash

I created a test directory on a remote server. To simulate this command, I created 6 files inside this test directory. The expected behavior of the command is to only keep 5 recent files in the directory. Good news is this command works!
My only trouble is that I cannot execute the same command remotely. The only difference is the quoting needed for remote execution:
ssh account@someremoteserver.com "rm -rf `ls -t /usr/local/testingCommands | awk 'NR>5'`"
The reason for this is that I have a Jenkins CI server that needs to remotely clean up the remote server, keeping only the 5 most recent files.
Any help is greatly appreciated. Thanks!

ssh account@someremoteserver.com 'cd /usr/local/testingCommands && /usr/bin/ls -1Qt | awk "NR > 5" | /usr/bin/xargs /usr/bin/rm -rf'
Remarks:
In your version the backticks sit inside double quotes, so the local shell on the Jenkins machine expands them before ssh even runs; single quotes defer the whole pipeline to the remote shell
Use absolute paths for all commands (for security reasons)
rm ... $(...) is really too dangerous, so invert the rm and ls commands:
use ls with the one-entry-per-line option (-1) and quoted filenames (-Q), so filenames with spaces survive
cd into the directory first, because ls prints bare filenames and rm would otherwise look for them in the remote home directory
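If the remote host has GNU findutils, a newline-delimited xargs is a slightly more robust alternative to -Q; a minimal sketch under that assumption:
# -d "\n" makes xargs split on newlines only, so names with spaces
# survive; -r skips running rm when nothing matches.
ssh account@someremoteserver.com 'cd /usr/local/testingCommands && ls -1t | awk "NR > 5" | xargs -d "\n" -r rm -rf --'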

Executing Commands from Windows on Unix via Batchfile

I have a problem that I have not been able to find a solution for in quite a while.
I want to execute the following line from a batch file on my Windows machine:
ssh %1@%2 "D: && ssh %3@%4 cd /media/usbmsd/ && cp "$(ls -t /media/usbmsd | head -1)" /buffer"
This batch file will later be executed from a cmd line with the parameters. I am trying to access one system (Windows) via ssh and from there hop again via ssh to another system (Unix), where I need to find the newest file in the /media/usbmsd directory and copy it to the folder /buffer.
When I execute it from my cmd line I get the following error:
'head' is not recognized as an internal or external command,
operable program or batch file.
I have to say that I am not very experienced with this kind of application and am happy about any help.
Greetings, Denis
You could connect via a proxy jump; then you don't need a second ssh command.
A somewhat obscure escaping of the quotes is then necessary.
(I tested this without the intermediate Windows client.)
The first caret ^ is necessary to quote for the cp command, and the backslashes are necessary to convince the first expansion to leave the inner quotes untouched.
ssh %3@%4 -o ProxyJump=%1@%2 ^"cd /media/usbmsd/ && cp \"$(ls -t /media/usbmsd | head -1)\" /buffer^"
But it would be much easier to place a script mycopyscript.sh on the destination host:
cd /media/usbmsd/ && cp "$(ls -t /media/usbmsd | head -1)" /buffer
And then use:
ssh %3@%4 -o ProxyJump=%1@%2 "./mycopyscript.sh"
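Alternatively, the jump can be configured once in the Windows user's ssh config, so the batch file needs no ProxyJump option at all; a sketch with hypothetical host names:
Host unixtarget
    HostName unixhost.example.com
    User unixuser
    ProxyJump winuser@winhost.example.com
With that entry in place, ssh unixtarget "./mycopyscript.sh" is enough.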

Shell script run by Jenkins creates non-terminating process

I'm trying to use Jenkins to create a special git repository. I created a free-style project that just executes a shell script. When I execute this script by hand, without Jenkins, it works just fine.
From Jenkins, however, it behaves quite differently.
# this will remove all subtrees
git log | grep git-subtree-dir | tr -d ' ' | cut -d ":" -f2 | sort | uniq | xargs -I {} bash -c 'if [ -d $(git rev-parse --show-toplevel)/{} ] ; then rm -rf {}; fi'
rm -rf .git
If this part is executed by Jenkins, I see errors like this in the console output:
rm: cannot remove '.git/objects/pack/pack-022eb85d38a41e66ad3f43a5f28809a5a3ee4a0f.pack': Device or resource busy
rm: cannot remove '.git/objects/pack/pack-05630eb059838f149ad30483bd48d37f9a629c70.pack': Device or resource busy
rm: cannot remove '.git/objects/pack/pack-26f510b5a2d15ba9372cf0a89628d743811e3bb2.pack': Device or resource busy
rm: cannot remove '.git/objects/pack/pack-33d276d82226c201eedd419e5fd24b6b906d4c03.pack': Device or resource busy
I modified this part of the script like this:
while true
do
if rm -rf .git ; then
break
else
continue
fi
done
But this doesn't help. In the task manager I see a git process that just doesn't terminate.
I pieced said script together from a lot of googling and I do not understand very well what is going on.
Jenkins runs on Windows Server 2012 behind IIS; shell scripts are executed by bash shipped with git for Windows.
1/ Ensure your path is correct, and that no quote/double-quote escaping occurs while Jenkins starts the job.
2/ Your command line is a bit too convoluted to be correctly and safely evaluated.
Put your commands in a regular script, starting with #!/bin/bash, instead of running them through the command line.
xargs -I {} bash -c 'if [ -d $(git rev-parse --show-toplevel)/{} ] ; then rm -rf {}; fi'
becomes
xargs -I {} /path/myscript.sh {}
with
#!/bin/bash
# A hyphen is not valid in a bash variable name, so use e.g. "toplevel".
toplevel="$(git rev-parse --show-toplevel)"
# Wait for any background child processes (see point 3 below).
wait
if [ -d "${toplevel}/${1}" ] ; then
    rm -rf "${1}"
fi
Please note that your script is really unsafe, as you rm -rf a parameter without even validating it first!
3/ You can add a wait between the git and the rm to wait for the end of the git process
4/ log your git command into a log file, with a redirection >> /tmp/git-jenkins-log
5/ put all of those commands in a script (see #2)
The following is an infinite loop in case rm -rf fails:
while true
do
if rm -rf .git ; then
break
else
continue
fi
done
Indeed, continue can be used in a for or while loop to skip to the next iteration, but in this while loop it will retry the same rm command forever.
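If you want to retry at all, bound the loop so a stuck file cannot hang the build forever; a minimal sketch (the attempt count and delay are arbitrary):
# Try the removal up to 10 times, pausing between attempts.
for attempt in {1..10}; do
    if rm -rf .git; then
        break
    fi
    sleep 2
done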
Well, apparently I was able to fix my issue by running the script as a different user.
By default on Windows, Jenkins executes all jobs as the SYSTEM user. I have no idea why that affects the behaviour of my script, but running it with psexec from a specially created user account worked.
In case anyone is interested, I did something like this:
psexec -accepteula -h -u Jenkins -p _password_ "full/path/to/bash.exe" full/path/to/script.sh

How to build a bash script with an SFTP connection to pull files

I'm implementing a bash agent script to pull files from a remote server over SFTP.
The script must:
connect via SFTP
list the files
loop over the files found
get each file and copy it to the agent side
delete each file after it has been copied
The script is as follows:
#!/bin/bash
SFTP_CONNECTION="sftp -oIdentityFile=/home/account_xxx/.ssh/service_ssh user@host"
DEST_DATA=/tmp/test/data/
# GET list file by ls command ###############
$SFTP_CONNECTION
$LIST_FILES_DATA_OSM1 = $("ls fromvan/test/data/test_1")
echo $LIST_FILES_DATA_OSM1
for file in "${LIST_FILES_DATA_OSM1[@]}"
do
$SFTP_CONNECTION get $file $DEST_DATA
$SFTP_CONNECTION rm $file
done
I tried the script, but it seems that the connection and the command execution (ls) happen as separate, disconnected steps.
How can I run the commands sequentially, as described above?
Screenshots (not shown) of what I tried: the find command is reported as invalid, ssh seems not to be available, and the rsync attempt to fetch the files fails.
Thanks
First of all, I would recommend the following syntax changes:
#!/bin/bash
sftp_connection() {
sftp -oIdentityFile=/home/account_xxx/.ssh/service_ssh user@host "$@";
}
Dest_Data=/tmp/test/data/
# GET list file by ls command ###############
sftp_connection
List_Files_D_OSM1=$(ls fromvan/test/data/test_1)
echo "$List_Files_D_OSM1"
for file in "${List_Files_D_OSM1[@]}"
do
sftp_connection get "$file" $Dest_Data
sftp_connection rm "$file"
done
Quoting $file and $List_Files_D_OSM1 to prevent globbing and word splitting.
Assignments can't start with a $, otherwise bash will try to execute List_Files_D_OSM1 and complain with a command not found.
No white space around the = in assignments like List_Files_D_OSM1 = $("ls fromvan/test/data/test_1")
Inside $(...) put the command itself, not a quoted string: $(ls fromvan/test/data/test_1), not $("ls fromvan/test/data/test_1")
You can use ShellCheck to catch this kind of error.
Having said that, it is in general not a good idea to use ls in this way.
What you can use instead is something like find. For example:
find . -type d -exec echo '{}' \;
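Unlike the output of ls, find's output can be consumed NUL-delimited, which is safe for any filename; a minimal local sketch using the question's path:
# Iterate over regular files; -print0 + read -d '' survive spaces
# and even newlines in filenames.
while IFS= read -r -d '' f; do
    printf 'found: %s\n' "$f"
done < <(find fromvan/test/data/test_1 -type f -print0)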
Use a different client. lftp supports sftp as a transport, and has a subcommand for mirroring which will do the work of listing the remote directory and iterating over files for you.
Assuming your ~/.ssh/config contains an entry like:
Host myhost
IdentityFile /home/account_xxx/.ssh/service_ssh
...you can run:
lftp -e 'mirror fromvan/test/data/test_1 /tmp/test/data' sftp://user@myhost
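Since the question also wants the files deleted after copying, note that mirror supports a --Remove-source-files option. If lftp is not available, plain sftp can also run a fixed command sequence in a single session by feeding it a heredoc; a minimal sketch reusing the question's paths:
#!/bin/bash
# One sftp session: download everything from the remote directory,
# then delete the remote copies.
sftp -oIdentityFile=/home/account_xxx/.ssh/service_ssh user@host <<'EOF'
lcd /tmp/test/data
cd fromvan/test/data/test_1
get *
rm *
EOF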

changing directories by using cd

I executed the following command:
cd /mnt/c/Users/Daniel/Documents/Assg/ | cat file.txt
My question is: why doesn't it change the directory? The output of file.txt is displayed, but the directory is not changed. I understand that if we execute the same commands in the following order, it won't work because cd changes the directory in a child process, so the net result is the same.
cat file.txt | cd /mnt/c/Users/Daniel/Documents/Assg/
Try just cd /mnt/c/Users/Daniel/Documents/Assg/
As was already stated, the following:
cd /mnt/c/Users/Daniel/Documents/Assg/
should do the trick, but I'd like to go a bit more into why the command you presented doesn't work as expected. In Bash (and other shells), you can have multiple "subshells" running under a parent shell. Each of these subshells has its own working directory. When you run commands in a pipeline, as you have done, a subshell is created. The working directory of that subshell was changed, but that had no effect on the shell you were working in.
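You can watch this happen; in Bash with default settings, each part of a pipeline runs in its own subshell:
pwd               # e.g. /home/user
cd /tmp | cat     # the cd runs in a subshell created for the pipeline
pwd               # still /home/user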
It depends on the shell you use
When you run two commands in a pipeline, typically one or both of the commands is run in a separate child process.
In older shells this would be both, in later shells this can be either
the first or the last.
At one point, the ksh93 team decided to make the last command in the pipeline the parent. This prevents race conditions, and if the command is a builtin, it runs inside the current shell process and preserves the results of the pipeline.
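Bash later adopted the same idea as an opt-in shell option; a quick demonstration (in a script, where job control is off):
#!/bin/bash
# With lastpipe set, the last command of a pipeline runs in the
# current shell process, so this cd actually persists.
shopt -s lastpipe
true | cd /tmp
pwd    # prints /tmp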
Nevertheless, cd is a command that does not consume or produce any input or output (except for diagnostics on stderr), and using it in a pipeline
by itself is just silly. A better, because more predictable, command line would be:
cd /mnt/c/Users/Daniel/Documents/Assg/ && cat file.txt
This will assure that cat only runs if cd succeeds, and will then
show the contents of file.txt from the given directory.
You have different options.
Perform cat after trying to change dir
cd /mnt/c/Users/Daniel/Documents/Assg/ ; cat file.txt
Perform cat only when change dir worked
cd /mnt/c/Users/Daniel/Documents/Assg/ && cat file.txt
Perform cat in the other directory, but return to the current dir when finished.
(cd /mnt/c/Users/Daniel/Documents/Assg/ && cat file.txt)
# or
cat /mnt/c/Users/Daniel/Documents/Assg/file.txt
EDIT:
Your question, "why doesn't cd /mnt/c/Users/Daniel/Documents/Assg/ | cat file.txt change the directory?", can be answered two ways.
The technical explanation is given by @Henk: the pipe introduces a subshell, and environment settings made in a subshell are lost when the subshell exits.
The functional explanation is that you used the wrong syntax for what you are trying to accomplish.

lftp: how to recursively set permissions; first by directory, then by file

When securing a Drupal or WordPress installation on a shared host that does not expose SSH access (a lousy situation, fwiw), lftp seems like the right approach to batch-setting permissions for directories and files. The find command boasts that you can redirect its output, so one should be able to run find, grep for lines ending in "/" (meaning a directory), set the permissions on such matches to 755, perform the inverse on file matches and set those to 644, and then fine-tune specific files, such as settings.php and so forth.
lftp prompt> find . | grep "/$" | xargs chmod -v 755
Isn't working, and I'm sure I have failed to chain these commands in the correct sequence and format.
How to get this to work?
Update: by "isn't working" I mean that the above command produces no output to the console, nor to the lftp error log. It isn't running these commands locally, fwiw. I'll reduce the command as a demonstration:
find . | grep "/$"
Will take the output of find and return the matches (here, directories) by nature of the string match:
./daily/
./ffmpeg-installer/
./hourly/
./includes/
./includes/database/
./includes/database/mysql/
./and_so_forth_on_down
Which is cool, since I wish to perform a chmod (an internal command for lftp, with support varying by ftp server). So I expand the command like this:
find . | grep "/$" | xargs echo
Which outputs nothing. No error output, either. The pipe from grep to xargs isn't happening.
My goal is to form the equivalent of:
chmod 755 ./daily/
chmod 755 ./ffmpeg-installer/
In lftp, the chmod command is performing an ftp-server-permissions change, not a local perms change.
For an explanation of why this does not work as expected, read on - for a solution to the given problem, scroll down.
The answer can be found in the manpage for lftp, which states that
"[s]ome commands allow redirecting their output (cat, ls, ...) to file or via pipe to external command."
So, when you are using a pipe like this on a command that does support redirection in lftp, you are piping its output to your local tools. This eventually results in chmod trying to change the permissions of a file/directory on your local machine, which will most likely fail unless you coincidentally have the same directory layout locally in your current directory; that is probably the problem you encountered.
The grep + xargs pipe does work, I just tested the following:
lftp> find -d 2 | grep "/$"
./
./applications/
./lost+found/
./netinfo/
./packages/
./security/
./systems/
lftp> find -d 2 | grep "/$" | xargs echo
./ ./applications/ ./lost+found/ ./netinfo/ ./packages/ ./security/ ./systems/
My wild guess is that it did not appear to work for you because you did not specify a max-depth to find and the network connection + buffering in the pipe got in the way. When I try the same on a directory containing many files/subfolders it takes really long to finish and print. Did the command actually finish for you without output?
But still, what you are trying to do is not possible. As I stated, the right-hand side of the pipe runs external commands (even if a builtin of the same name exists), as explained by the manual, so
lftp> chmod 644 foobar
and
lftp> echo "foobar" | xargs chmod 644
are not equivalent.
Yes, chmod is a builtin, but when used in a pipe in the client it will not execute the builtin; the manpage clearly states this, and you can easily test it yourself. Try the following commands and check their output:
lftp> echo foo | uname -a
lftp> echo foo | ls -al
lftp> echo foo | chmod --help
lftp> chmod --help
Solution
As far as a solution to your problem is concerned, you can try something along the lines of:
#!/bin/bash
server="ftp.foo.bar"
root_folder="/my/path"
{
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep "/$"
quit
EOF
} | awk '{ printf "chmod 755 \"%s\"\n", $0 }'
{
lftp "${server}" <<EOF
cd "${root_folder}"
find | grep -v "/$"
quit
EOF
} | awk '{ printf "chmod 644 \"%s\"\n", $0 }'
} | lftp "${server}"
This logs in to your server, cds to the folder where you want to start recursively changing the permissions, uses find + grep to find all directories, and logs out. It pipes this file list into awk to build chmod commands around it, repeats the whole process for files, and then pipes the whole list of commands into a new lftp invocation to actually run the generated chmod commands.
You will also have to add your credentials to the lftp invocations and you might want to comment out the final | lftp "${server}" to check if it produces the desired output before you actually run the whole thing. Please report back if this works for you!
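For the credentials, lftp's -u flag is one option (the values shown are placeholders):
lftp -u 'myuser,mypassword' "${server}"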
