Escape whole file content to send it to a command over ssh - bash

I'm trying to use an ssh command like:
ssh user@host command -m MYFILE
MYFILE is the content of a file in my local directory.
I'm using Bash. I've tried to use printf "%q", but it's not working. MYFILE contains spaces, newlines, single and double quotes...
Is there a way to pass the file content to my command? I can't actually run anything other than command on the remote host.

How about first transferring the file to the remote machine:
scp MYFILE user@host:myfile &&
ssh user@host 'command -m "$(< myfile)" && rm myfile'
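If the remote login shell is bash, a single-command variant may also work: quote the file content locally with printf "%q" and embed the result in the remote command line. A sketch, not tested against your particular command:
# printf %q emits bash-specific quoting (e.g. $'...'), so the remote shell
# must be bash. Note that $(< MYFILE) drops trailing newlines.
ssh user@host "command -m $(printf '%q' "$(< MYFILE)")"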

Related

Testing boolean condition with grep on a remote host in a bash script

In a shell script I'm writing, I want to check whether a file on a remote host contains the word "open".
From the command line, I can do this successfully using
ssh user@remote_host 'grep "open" remote/folder/file'
When I was testing this in a local folder, the following worked:
if grep -q "open" local/folder/file; then
echo ...
I'm not sure how to format the conditional using ssh, though. Double quotes? Brackets? Just putting the above ssh command (which works from the command line) into brackets doesn't work:
if [ ssh user@remote_host 'grep -q "open" remote/folder/file' ]; then ...
All pointers appreciated.
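No brackets are needed here: if tests the exit status of any command, and [ is itself just a command (test). Since ssh exits with the remote command's status, grep -q can drive the conditional directly. A minimal sketch:
# grep -q sets the exit status on the remote side; ssh passes it back.
if ssh user@remote_host 'grep -q "open" remote/folder/file'; then
    echo "file contains open"
fi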

Passing an environment variable to a bash script over ssh from perl system

Currently I am executing run scripts using perl system by sshing into a remote machine:
system("ssh -t remote $dir/bashscript> $dir/stdout.stdout 2> $dir/stderr.stderr &");
I want to pass an environment variable to the bashscript on my remote machine (a directory to be specific). What is the best way to do it? And what should I add in my bashscript to accept the argument?
Try this:
system("ssh -t remote FOO='dir/dir/filename.stderr' $dir/bashscript> $dir/stdout.stdout 2> $dir/stderr.stderr &");
Note that if the value comes from an untrusted source this can be dangerous. You should really escape the value before you pass is like that. For example, you could do something like this:
my $foo='some value here';
$foo=~s/'/'\\''/g; # escape the '
system("ssh -t remote env FOO='$FOO' $dir/bashscript> $dir/stdout.stdout 2> $dir/stderr.stderr &");
In either case, bashscript can access the value via $FOO.
With all of @redneb's caveats about sanitizing input:
ssh remotehost FOO=bar \; export FOO \; script > out 2> err
or if you're sure that the shell on the remote host will be bash, the more compact
ssh remotehost export FOO=bar \; script > out 2> err
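On the receiving end, bashscript needs nothing special to accept the value; it simply reads FOO from its environment. A minimal sketch of the remote script, assuming FOO carries the directory:
#!/bin/bash
# FOO is inherited from the environment set on the ssh command line;
# abort with a message if it was not passed.
dir="${FOO:?FOO not set}"
echo "working in $dir"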

Why is the order of execution inverted in this bash script?

I have this script :
ssh -T user@123.456.789.123 <<EOF
cd www
var=$(tail index.htm)
echo $var
EOF
What I thought it should do is :
Connect to the server through SSH,
then change to the folder www,
then store the tail of index.htm into the variable var
and finally echo the result.
Instead it seems that tail is executed before the change of folder, and thus doesn't find the index.htm file.
I've tried with different commands, and each time the command substitution whose result I'm trying to store in a variable seems to be executed right after the SSH connection is opened, before any other part of the script.
What am I missing here ?
The $(...) is being expanded locally, before the contents of the here document are passed to ssh. To send literal text to the remote server, quote the here document delimiter.
ssh -T user@123.456.789.123 <<'EOF'
cd www
var=$(tail index.htm)
echo "$var"
EOF
(Also, quote the expansion of $var to protect any embedded spacing from the shell.)
The tail is running in the bash script on your local machine, not on the remote host. The substitution is getting made before you even execute the ssh command.
Your script can be replaced with simply:
ssh -T user@123.456.789.123 tail www/index.htm
If you want to send those commands to the remote server, you can write
ssh -T user@123.456.789.123 'cd www && var=$(tail index.htm) && echo $var'
Note that chaining each command on the success of the previous one lets ssh return a meaningful exit code. With your heredoc, whatever happens (e.g. tail fails), ssh returns $?=0 because the final echo will not fail.
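You can see the difference by comparing the exit status of the two forms; a sketch using the same placeholder host:
# Heredoc form: the remote shell exits with the status of its last command
# (echo), so ssh reports success even if tail failed.
ssh -T user@123.456.789.123 <<'EOF'
var=$(tail www/index.htm)
echo "$var"
EOF
echo "heredoc exit: $?"    # 0 even when tail fails

# Chained form: && stops at the first failure, and ssh propagates it.
ssh -T user@123.456.789.123 'cd www && var=$(tail index.htm) && echo "$var"'
echo "chained exit: $?"    # non-zero if cd or tail fails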
Another option is to create a script there and launch it with ssh.

sftp put command fails when in shell script

I am trying to make a shell script that creates a mysql dump and then puts it on another computer. I have already set up passwordless ssh and sftp. The script below creates the mysql dump file on the local computer when it is run and doesn't throw any errors, but the file "dbdump.db" is never put on the remote computer. If I execute the sftp connection and put command by hand, it works.
contents of mysql_backup.sh
mysqldump --all-databases --master-data > dbdump.db
sftp -b /home/tim tim@100.10.10.1 <<EOF
put dbdump.db
exit
EOF
Try using scp instead; it should be easier in your case.
scp dbdump.db tim@100.10.10.1:/home/tim/dbdump.db
Both sftp and scp use SSH underneath.
Write the mput/put commands into one file (file_contains_put_command) and try the command below.
sftp2 -B file_contains_put_command /home/tim tim@100.10.10.1 >> log_file
Example:
echo binary > sample_file
echo mput dbdump.db >> sample_file
echo quit >> sample_file
sftp2 -B sample_file /home/tim tim@100.10.10.1 >> log_file
Your initial approach is a few characters off working, though. You're telling sftp to read its batch commands from /home/tim (-b /home/tim). If you change this to -b -, it will read its batch commands from stdin.
Something along these lines; and if -b /home/tim was intended to, say, change directory remotely, you can add cd /home/tim to your here document.
mysqldump --all-databases --master-data > dbdump.db
sftp -b - tim@100.10.10.1 <<EOF
put dbdump.db
exit
EOF
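For completeness, a variant with the remote directory change mentioned above, assuming the dump should land in /home/tim:
mysqldump --all-databases --master-data > dbdump.db
sftp -b - tim@100.10.10.1 <<EOF
cd /home/tim
put dbdump.db
exit
EOF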

A script to ssh into a remote folder and check all files?

I have a public/private key pair set up so I can ssh to a remote server without having to log in. I'm trying to write a shell script that will list all the folders in a particular directory on the remote server. My question is: how do I specify the remote location? Here's what I've got:
#!/bin/bash
for file in myname@example.com:dir/*
do
if [ -d "$file" ]
then
echo $file;
fi
done
Try this:
for file in `ssh myname@example.com 'ls -d dir/*/'`
do
echo $file;
done
Or simply:
ssh myname@example.com 'ls -d dir/*/'
Explanation:
The ssh command accepts an optional command after the hostname and, if a command is provided, it executes that command on login instead of the login shell; ssh then simply passes on the stdout from the command as its own stdout. Here we are simply passing the ls command.
ls -d dir/*/ is a trick to make ls skip regular files and list out only the directories.
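One caveat: the backtick loop above word-splits the output, so directory names containing spaces will break apart. A safer sketch reads one name per line:
# IFS= read -r preserves each line exactly as ls printed it.
ssh myname@example.com 'ls -d dir/*/' | while IFS= read -r dir; do
    echo "$dir"
done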
