I'm trying to run the following command:
ssh -A -t -i ~/.ssh/DevKP.pem -o StrictHostKeyChecking=no root@MyServer "for file in \`ls /root/spark/work/ \`; do echo 'file - ' $file; done"
The output is:
file -
file -
Connection to MyServer closed.
When I ran the command on the remote server itself:
for file in `ls /root/spark/work/ `; do echo 'file - ' $file; done
I get the output:
file - test1.txt
file - test2.txt
How do I get it to work from the local server? It seems to find the right files (there were two echoes), but the names come out empty.
Does anyone have any idea?
Thanks
You need to escape the $ in $file to make sure the remote shell interprets it instead of your local one. You should also simplify the ls /root/.. to for file in /root/../*:
ssh root@MyServer "for file in /root/spark/work/* ; do echo 'file - ' \$file; done"
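Note that the glob expands to full paths, so the loop above prints "file - /root/spark/work/test1.txt" rather than just the name. If you want the bare names, a minimal sketch (assuming the files sit directly under /root/spark/work/) strips the directory with basename; using single quotes locally also avoids the escaping altogether:
ssh root@MyServer 'for file in /root/spark/work/*; do echo "file - $(basename "$file")"; done'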
My aim is to create a shell script that logs in, filters the list of available files, and selects a file to get. I need to run commands there, like in bash.
My sample code is:
sshpass -p password sftp user@10.10.10.10 <<EOF
cd /home/
var=$(ls -rt)
echo $var
echo "select a folder"
read folder
cd $folder
filen=$(ls -rt)
echo $filen
echo "select a file"
read name
get $name
bye
EOF
The above approach will not work. Remember that the 'here document' (<<EOF ... EOF) is evaluated as input to the sftp session. Prompts will be displayed, and user input will be requested, BEFORE any output (ls in this case) is available from sftp.
Consider using lftp, which has more flexible constructs. In particular, it will let you use variables, create commands dynamically, etc.
lftp sftp://user@host <<EOF
cd /home
ls
echo "Select Folder"
shell 'read folder ; echo "cd $folder" >> temp-cmd'
source temp-cmd
ls
echo "Select Folder"
shell 'read file ; echo "get $file" >> temp-cmd'
source temp-cmd
EOF
In theory, you can create similar constructs with pipes and sftp (maybe a co-process?), but this is much harder.
Of course, the other alternative is to create different sftp sessions for listing, but this will be expensive/inefficient.
After some research and experimentation, I found a way to create batch/interactive sessions with sftp. I'm posting it as a separate answer, as I still believe the easier way to go is with lftp (see the other answer). It might be useful on systems without lftp.
The initial exec creates FD #3, pointing to the original stdout (probably the user's terminal). Anything sent to stdout will be executed by the sftp process in the pipeline.
The pipe is required to allow both processes to run concurrently; using a here document would result in sequential execution. The sleep statements are required to allow sftp to complete data retrieval from the remote host.
exec 3>&1                     # FD 3: the original stdout (the user's terminal), used for prompts
(
echo "cd /home/"
echo "ls"
sleep 3                       # Allow time for sftp to print the listing
echo "select a folder" >&3    # prompt goes to the terminal via FD 3, not to sftp
read folder
echo "cd $folder"
echo "ls"
sleep 3                       # Allow time for sftp
echo "select a file" >&3
read name
echo "get $name"
echo "bye"
) | sshpass -p password sftp user@10.10.10.10
I would suggest that you create a file with the patterns of the files you want downloaded; then you can get the files downloaded in one single line:
sftp_connection_string <<< $"ls -lrt"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e 's/^/get -P /g'|sftp_connection_string
If there are multiple specific folders to be looked into, then:
Script version
for fldr in folder1 folder2 folder3;do
sftp_connection_string <<< $"ls -lrt ${fldr}/"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e "s/^/get -P ${fldr}/g"|sftp_connection_string
done
One-liner
for fldr in folder1 folder2 folder3;do sftp_connection_string <<< $"ls -lrt ${fldr}/"|grep -v '^sftp'|grep -f pattern_file|awk '{print $9}'|sed -e "s/^/get -P ${fldr}\//g"|sftp_connection_string;done
Let me know if it works.
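For concreteness, here is a hypothetical setup; the sftp_connection_string definition and the pattern_file contents below are assumptions for illustration, not part of the answer above:
# hypothetical wrapper used in the pipelines above: it feeds the commands it
# receives on stdin to sftp (same sshpass style as the earlier examples)
sftp_connection_string() {
    sshpass -p password sftp user@10.10.10.10
}
# pattern_file holds one grep pattern per line, e.g.:
#   Logfile_2019-04-.*\.txt
#   report_.*\.csv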
I would need a .sh script that allows me to read only the second line of a file and then send it to machine B.
Example file:
timestamp_pippo.csv
"Row1_skipped"
"Row2_send_to_machine"
The file is in the path:
C:\Program Files\Splunk\var\run\splunk\csv
Only the second row, "Row2_send_to_machine" (it contains a unix command), must be sent to machine B.
Once the command has been sent, the file timestamp_pippo.csv must be deleted.
Can you help me? I'm not familiar with .sh scripts.
What I've managed to create so far is only this:
for a in $(C:\Program Files\Splunk\var\run\splunk\csv cat timestamp_pippo.csv|grep -v Row1_skipped);do
ssh unix_machine@11.111.111.11 $a
done
Since you want to retain the for loop:
for cmd in $(head -2 timestamp_pippo.csv | tail -1); do ssh <machine> $cmd; done
Though TBH, this is bad: if you actually extend this and use the loop, you will be doing multiple connections to the ssh machine. Better to create the batch file you want, then do one ssh and run the batch. Here's a decent explanation of running a local script on a remote host: https://unix.stackexchange.com/questions/313000/run-local-script-with-local-input-file-on-remote-host
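A hedged sketch of that batch approach, reusing the file and host from the question (the temporary script name is an assumption):
# put the command from line 2 of the CSV into a small batch script
head -2 timestamp_pippo.csv | tail -1 > /tmp/machine_b_batch.sh
# a single ssh connection runs the whole batch on machine B
ssh unix_machine@11.111.111.11 'bash -s' < /tmp/machine_b_batch.sh
rm /tmp/machine_b_batch.sh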
Thanks for the reply. I've solved with this script:
path="/home/weblogic/testCSV/*.csv"
for a in $(ls -lrt $path|awk '{print $9}');do
#echo $(head -2 $a | tail -1)
ssh unix_machine@11.111.111.11 $(head -2 $a | tail -1)
rm $a
#echo "file $a removed"
break
#echo "send command"
done
Steps:
- check for the file
- execute the command from the oldest file
- remove the file
The command we have to send to machine B is in the second line of the file timestamp_pippo.csv.
Another question:
How can I authenticate to machine B?
BR
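Regarding the authentication question, one common option (an assumption here, not something stated in the thread) is key-based SSH authentication, so the script never has to prompt for a password:
# run once on the machine where the script lives; host/user are placeholders
ssh-keygen -t rsa -b 4096                  # generate a key pair (accept the defaults)
ssh-copy-id unix_machine@11.111.111.11     # install the public key on machine B
# afterwards, ssh unix_machine@11.111.111.11 <command> connects without a password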
I'm trying to execute the following script to get the currentPath string from a file and pass it to a find-and-replace in the next command. It works fine when run directly on the host server, but when executed from a Jenkins build step I get a failure that the file is not found.
Error: sed: can't read test.txt: No such file or directory
The expected result is to get the "test.txt" file updated with "newPath" wherever "currentPath" exists.
Code:
user="testuser"
host="remotehost"
newPath="/testpath/"
filetoUpdate="./test.txt" # this is a file
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa ${user}@${host} "currentPath="$(sed -n '/^PATH='/p $filetoUpdate | cut -d'=' -f2)" ; echo "currentPath was "$currentPath"" ; sed -n 's|$currentPath|$newPath|g' $filetoUpdate"
One of the problems is that the locally defined variables newPath and filetoUpdate will not be available in the script running on the remote host. Also, because of the quoting, only currentPath= will be executed on the remote host; the rest of the commands run locally.
I'd recommend saving the script in a separate file.
test.sh:
newPath="/testpath/"
filetoUpdate="./test.txt"
currentPath=`sed -n '/^PATH=/p' $filetoUpdate | cut -d= -f2`
echo "currentPath was "$currentPath""
sed -i.bak "s|$currentPath|$newPath|g" $filetoUpdate
(I added -i for in-place editing)
And to run it with the command
ssh remotehost < test.sh
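If some of those values have to come from the Jenkins side, one option (a sketch, assuming bash is available on the remote host) is to pass them to the script as positional arguments:
# $1 and $2 become available inside test.sh on the remote host
ssh remotehost 'bash -s' -- "/testpath/" "./test.txt" < test.sh
Inside test.sh, read them with newPath="$1" and filetoUpdate="$2" instead of hard-coding the values.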
I want to automate a SFTP process to transfer the last file created in local server and send it to remote server.
On the local server, in "/Source/Path/", I have files named like below:
Logfile_2019-04-24
Logfile_2019-04-24_old.txt
This is my current script:
dyear=`date +'%Y' -d "1 day ago"`
dmonth=`date +'%b' -d "1 day ago"`
ddate=`date +%Y-%m-%d -d "1 day ago"`
HOST='192.168.X.X'
USER='user'
PASSWD='password'
localpath='/Source/Path/'$dyear'/'$dmonth'/'*$ddate*'.txt'
remotepath='/Destination/Path/'$dyear'/'$dmonth'/'
echo $localpath
echo $remotepath
export SSHPASS=$PASSWD
sshpass -e sftp $USER@$HOST << EOF
put '$localpath' '$remotepath'
EOF
When I do echo $localpath, it prints the correct file, but in the script I get this error:
Connecting to 192.168.X.X...
sftp> put '/Source/Path/2019/Apr/*2019-04-24*' '/Destination/Path/2019/Apr/'
stat /Source/Path/2019/Apr/*2019-04-24*: No such file or directory
What would be the correct pattern for the *$ddate*'.txt' part in the following line:
localpath='/Source/Path/'$dyear'/'$dmonth'/'*$ddate*'.txt'
in order to transfer the file "Logfile_2019-04-24_old.txt"?
Thanks in advance
Replace
put '$localpath' '$remotepath'
with
put "$(echo $localpath)" '$remotepath'
to force wildcard (*) expansion in your here-doc.
This does not work if the wildcard expands to multiple files.
I don't think you need a regex for this problem. You can get the latest file created in the directory with the following shell command and assign it to your localpath variable.
ls -t directoryPath | head -n1
latestfile=`ls -t /Source/Path/$dyear/$dmonth | head -n1`
localpath='/Source/Path/'$dyear'/'$dmonth'/'$latestfile''
remotepath='/Destination/Path/'$dyear'/'$dmonth'/'
If you are able to get the filename, source and destination directories properly, you can directly use scp to copy the file to remote server:
sshpass -p $PASSWD scp $localpath $USER@$HOST:$remotepath
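Putting the pieces together, a minimal sketch (using the same variables defined earlier in the question); since $latestfile is now a literal file name, there is no wildcard left to expand and the quotes around the put arguments can simply be dropped:
latestfile=`ls -t /Source/Path/$dyear/$dmonth | head -n1`
localpath=/Source/Path/$dyear/$dmonth/$latestfile
remotepath=/Destination/Path/$dyear/$dmonth/
export SSHPASS=$PASSWD
sshpass -e sftp $USER@$HOST << EOF
put $localpath $remotepath
EOF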
I'm writing a bash script to send files from a linux server to a remote Windows FTP server.
I would like to check using FTP if the folder where the file will be stored exists before attempting to create it.
Please note that I cannot use SSH or SCP, and I cannot install new scripts on the linux server. Also, for performance reasons, I would prefer that checking and creating the folders is done using only one FTP connection.
Here's the function to send the file:
sendFile() {
ftp -n $FTP_HOST <<! >> ${LOCAL_LOG}
quote USER ${FTP_USER}
quote PASS ${FTP_PASS}
binary
$(ftp_mkdir_loop "$FTP_PATH")
put ${FILE_PATH} ${FTP_PATH}/${FILENAME}
bye
!
}
And here's what ftp_mkdir_loop looks like:
ftp_mkdir_loop() {
local r
local a
r="$1"              # the remote path to create, e.g. dir1/dir2/dir3
while [[ "$r" != "$a" ]]; do
a=${r%%/*}          # next path component
echo "mkdir $a"
echo "cd $a"
r=${r#*/}           # strip the component just handled
done
}
The ftp_mkdir_loop function helps in creating all the folders in $FTP_PATH (Since I cannot do mkdir -p $FTP_PATH through FTP).
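For example, calling the function with a hypothetical three-level path:
ftp_mkdir_loop "dir1/dir2/dir3"
# emits:
# mkdir dir1
# cd dir1
# mkdir dir2
# cd dir2
# mkdir dir3
# cd dir3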
Overall my script works but is not "clean"; this is what I'm getting in my log file after the execution of the script (yes, $FTP_PATH is composed of 5 existing directories):
(directory-name) Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
Cannot create a file when that file already exists.
To solve this, do as follows:
To ensure that you only use one FTP connection, generate the input (the FTP commands) as the output of a shell script.
E.g.
$ cat a.sh
cd /home/test1
mkdir /home/test1/test2
$ ./a.sh | ftp $Your_login_and_server > /your/log 2>&1
To allow FTP to test whether a directory exists, use the fact that the dir command can write its listing to a local file.
# ...continuing a.sh
# In a loop, $CURRENT_DIR is the next subdirectory to check-or-create
echo "dir $CURRENT_DIR $local_output_file"
sleep 5 # to leave time for the file to be created
if [ ! -s "$local_output_file" ]
then
echo "mkdir $CURRENT_DIR"
fi
Please note that "-s" test is not necessarily correct - I don't have acccess to ftp now and don't know what the exact output of running DIR on non-existing directory will be - cold be empty file, could be a specific error. If error, you can grep the error text in $local_output_file
Now wrap step #2 in a loop over your individual subdirectories in a.sh.
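A hedged sketch of that loop (the variable names and the listing-to-a-local-file detail follow the steps above; the exact behaviour of dir on a missing directory still needs checking, as noted):
# ...continuing a.sh: walk $FTP_PATH (assumed absolute, no spaces) one level at a time
local_output_file=/tmp/ftp_dir_check.txt
CURRENT_DIR=""
for part in $(echo "$FTP_PATH" | tr '/' ' '); do
    CURRENT_DIR="$CURRENT_DIR/$part"
    rm -f "$local_output_file"
    echo "dir $CURRENT_DIR $local_output_file"   # ftp writes the listing to the local file
    sleep 5                                      # leave time for the listing to arrive
    if [ ! -s "$local_output_file" ]; then       # empty or missing: assume the dir does not exist
        echo "mkdir $CURRENT_DIR"
    fi
done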
#!/bin/bash
FTP_HOST=prep.ai.mit.edu
FTP_USER=anonymous
FTP_PASS=foobar@example.com
DIRECTORY=/foo # /foo does not exist, /pub exists
LOCAL_LOG=/tmp/foo.log
ERROR="Failed to change directory"
ftp -n $FTP_HOST << EOF | tee -a ${LOCAL_LOG} | grep -q "${ERROR}"
quote USER ${FTP_USER}
quote pass ${FTP_PASS}
cd ${DIRECTORY}
EOF
if [[ "${PIPESTATUS[2]}" -eq 1 ]]; then
echo ${DIRECTORY} exists
else
echo ${DIRECTORY} does not exist
fi
Output:
/foo does not exist
If you want to suppress only the messages in ${LOCAL_LOG}:
ftp -n $FTP_HOST <<! | grep -v "Cannot create a file" >> ${LOCAL_LOG}