I've been trying to solve a relatively small problem: moving some files across SFTP servers. No luck so far.
In a nutshell, this is what I'm doing. I have three servers:
SourceSFTP
TargetSFTP
Target_2_SFTP
The script is supposed to do the following:
1. Connect to SourceSFTP
2. Grab all the files
3. Loop through the files
4. Call a function that takes a file as a parameter and does stuff to it; let's call it postfunc()
5. Drop the files onto TargetSFTP
The problem occurs when, inside postfunc, I put another call to lftp to transfer the file to Target_2_SFTP. That command is executed properly (I can see the file moved), but then step 5 never happens.
This is the script I have:
function postfunc() {
the_file=$1
lftp<<END_SCRIPT2
open sftp://$Target2SFTP
user $USERNAME $PASSWORD
cd /root
put $the_file
bye
END_SCRIPT2
}
echo "Downloading files from $SOURCE_SFTP"
lftp -e "echo 'testing connection';exit" -u $SOURCE_USERNAME,$SOURCE_PASSWORD $SOURCE_SFTP
lftp -e "set xfer:clobber true;mget $SOURCE_DIR*.csv;exit" -u $SOURCE_USERNAME,$SOURCE_PASSWORD $SOURCE_SFTP || exit 0
files=(*.csv)
batch=10
for ((i=0; i < ${#files[@]}; i+=batch)); do
commands=""
# Do some stuff
for((j=0; j < batch; j+=1)); do
commands=$commands"mv source_dir/${files[i+j} archivedir/${files[i+j]};"
postfunc ${files[i+j]}
done
echo "Archiving batch..."
lftp -e "$commands;exit" -u $SOURCE_USERNAME,$SOURCE_PASSWORD $SOURCE_SFTP
lftp<<END_SCRIPT
open sftp://$TARGET_SFTP
user $TARGET_USERNAME $TARGET_PASSWORD
cd $TARGET_DIR
mput dirr/*
bye
END_SCRIPT
done
Hopefully I'm missing something obvious... At the moment, even if I move just one file, "Archiving batch..." never shows up; if I remove the contents of postfunc(), everything executes correctly.
I have tried searching but can't find exactly what I'm after and maybe I don't even know exactly what to search for...
I need to FTP a variety of CSV files from multiple sites, each with different credentials.
I am able to do this one by one with the following, however I need to do this for 30 sites and do not want to copy and paste all of this.
What would be the best way to write this? If you can show me how, or point me to an answer, that would be great.
And for bonus points (I might have to ask a separate question): mget is not working from Linux to Linux, only from Linux to Windows. I have also tried curl, but no luck there either.
Thanks a lot.
p.s. not sure if it makes a difference, but I will be running this as a cron job every 15 minutes. I'm ok with that part ;)
#!/bin/bash
chmod +x ftp.sh
#Windows site global variables
ROOT='/data'
PASSWD='passwd'
# Site 1
SITE='site1'
HOST='10.10.10.10'
USER='sitename1'
ftp -in $HOST <<EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/Downloads"
mget "${SITE}}.csv1" "${SITE}}.csv2" #needs second "}" as part of file name
quit
EOF
echo "Site 1 FTP complete"
# Site 2
SITE='site2'
HOST='20.20.20.20'
USER='sitename2'
ftp -in $HOST <<EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/instrum/Downloads"
mget "${SITE}}.csv1" "${SITE}}.csv2" #needs second "}" as part of file name
quit
EOF
echo "Site 2 FTP complete"
#Linux site Global variables
ROOT='/home/path'
USER='user'
PASSWD='passwd2'
#Site 3
SITE='site_3'
HOST='30.30.30.30'
ftp -in $HOST << EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/Downloads"
get "${SITE}file1.csv" #mget not working for linux to linux FTP, don't know why.
get "${SITE}file2.csv"
quit
EOF
echo "Site 3 FTP complete"
#Site 4
SITE='site_4'
HOST='40.40.40.40'
ftp -in $HOST << EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/Downloads"
get "${SITE}file1.csv" #mget not working for linux to linux FTP, don't know why.
get "${SITE}file2.csv"
quit
EOF
echo "Site 4 FTP complete"
For credentials, put them into a separate file, with the variables for site 1 named site1, host1, and user1, plus comments, so that a different user running this script can understand it quickly, and so there is less chance of someone editing the passwords in the main script and introducing an error. At the start of your main script, load the file with the passwords before running the rest of the script.
In your main script, if the functionality is similar on all sites and you are always going to run the same code for all 30 sites, then you can use a loop running from 1 to 30. Inside the loop, build the variable names site, host, and user with the number appended, so each iteration executes the code with the right variables (see the sketch below).
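A minimal sketch of that idea, assuming all 30 sites genuinely follow the same pattern; the credentials path /path/to/credentials.conf and the variable names SITE1/HOST1/USER1 (and so on) are hypothetical:
#!/bin/bash
# credentials.conf (hypothetical) might contain lines such as:
#   ROOT='/data'
#   PASSWD='passwd'
#   SITE1='site1'; HOST1='10.10.10.10'; USER1='sitename1'
#   SITE2='site2'; HOST2='20.20.20.20'; USER2='sitename2'
source /path/to/credentials.conf

for n in $(seq 1 30); do
    # Indirect expansion: ${!site_var} looks up SITE1, SITE2, ... by name
    site_var="SITE$n"; host_var="HOST$n"; user_var="USER$n"
    SITE=${!site_var}; HOST=${!host_var}; USER=${!user_var}

    ftp -in "$HOST" <<EOF
user $USER $PASSWD
binary
cd "${ROOT}/${SITE}/"
lcd "/home/Downloads"
mget "${SITE}*.csv"
quit
EOF
    echo "Site $n FTP complete"
done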
There are also dedicated tools for copying files if these servers are on your network, for example rsync, which is efficient as well and may be worth a look.
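As a rough illustration (hypothetical host and paths, and it assumes SSH access rather than plain FTP), a single rsync call can mirror a remote CSV directory:
rsync -avz user@10.10.10.10:/data/site1/ /home/Downloads/site1/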
I am quite new to shell scripting and am trying to create a shell script which goes into a directory, picks up the *.sql files in that folder, and executes the identified files.
I know that it is possible to embed the SQL in the shell script itself, but I want to keep the SQL files separate from the shell script, as there are quite a lot of them and they will be easier to maintain outside the shell script.
Your help will be greatly appreciated.
You could generate a temporary shell script dynamically by including all the *.sql filenames to be executed. Then run this shell script at the end.
cd your_sql_script_path
final_script="/your_script_path/final_script.sh"
echo "sqlplus -s /nolog << EOF
CONNECT user/password;
whenever sqlerror exit sql.sqlcode;
set echo off
set heading off" > ${final_script}
for sql_file in *.sql;
do
echo "#${sql_file}" >>${final_script}
done
echo -e "exit;\nEOF" >>${final_script}
chmod +x ${final_script}
sh ${final_script}
You could try this:
$ for i in *.sql;
do
echo "$i and do your stuff";
done
output:
script01.sql and do your stuff
script02.sql and do your stuff
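If the goal is to execute each file rather than just echo its name, a minimal sketch of that loop, assuming sqlplus is on the PATH and using the hypothetical credentials user/password, could look like this:
cd your_sql_script_path
for sql_file in *.sql; do
    echo "Executing ${sql_file}..."
    # -s suppresses the banner; @ runs the script file, and exit ends the session
    sqlplus -s user/password <<EOF
whenever sqlerror exit sql.sqlcode;
@${sql_file}
exit;
EOF
done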
I have a series of bash commands, some with interactive prompts, that I need run on a remote machine. I have to call them in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way of starting an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there will be a command that prompts for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that seems to take over stdin completely as well, making it impossible to let the user respond to prompts for input. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
That is where stdin was being redirected, not in ssh itself. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Changing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$#"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX reported more than 2 MB where I looked), and this got me thinking about how I could use that to mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.
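For reference, a small wrapper sketch of the same trick; the function name run_remote and its usage are hypothetical (not part of sshx), and it assumes the remote login shell is bash:
# Run a local script on a remote host without a temporary file, keeping a tty for interactive prompts.
run_remote() {
    local host=$1 script=$2; shift 2
    # Base64-encode the script locally so no shell escaping is needed,
    # then have the remote bash decode it via process substitution.
    ssh -t "$host" /bin/bash "<(echo $(base64 < "$script" | tr -d '\n') | base64 --decode)" "$@"
}
# Usage: run_remote user@host ./myScript.sh arg1 arg2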
I have a little script that I use to send bash commands to several web servers under a load balancer. I'm able to send the command successfully, but I also want to execute it locally.
#!/bin/bash
echo "Type commands to be sent to web servers 1-8. Use ctrl+c to exit."
function getCommand() {
read thisCmd
echo "Sending '$thisCmd'..."
if [ ! -z "$thisCmd" ]; then
# Run command locally
echo "From web1"
cd ~
command $thisCmd
# Send to remotes
for i in {2..8}
do
echo "From web$i..."
ssh "web$i" "$thisCmd"
done
fi
echo Done
getCommand
}
getCommand
But this is resulting in
user#web1:~$ ./sshAll.sh
Type commands to be sent to web servers 1-8. Use ctrl+c to exit.
cd html; pwd
Sending 'cd html; pwd'...
From web1
./sshAll.sh: line 11: cd: html;: No such file or directory
From web2...
/home/user/html
How do I get this working?
When expanding a variable as a command like this:
$thisCmd
Or this
command $thisCmd
Bash only parses it as a single command, so ; and the like are just treated as an argument (or part of one), e.g. html;
So one basic solution to that is to use eval:
eval "$thisCmd"
But it's a little dangerous. Still, it's just the same as what you send to the remote servers: the remote shell executes those strings much like eval does.
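Applied to the script above, the local branch would look something like this (a sketch; eval carries the usual risks of executing whatever the user typed):
if [ ! -z "$thisCmd" ]; then
    # Run the command locally: eval re-parses the string, so "cd html; pwd" works
    echo "From web1"
    cd ~
    eval "$thisCmd"
    # Send to remotes unchanged; the remote shell already re-parses the string
    for i in {2..8}
    do
        echo "From web$i..."
        ssh "web$i" "$thisCmd"
    done
fi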
Edit: Updated to reflect some answers
I have this script, test.sh on my home computer:
Note: $USER = john
#!/bin/bash
/usr/bin/scp -q john@mysite.com:/home/$USER/tmp/$USER /home/$USER/tmp/ > /dev/null 2>&1
error_code="$?"
if [ "$error_code" != "0" ]; then #if file NOT present on mysite then:
echo "File does not exist."
exit
fi
echo "File exists."
Now, let's say I create the file on the server mysite.com like so:
echo > tmp/$USER
Now, when I run the above script on my desktop manually, like so:
./test.sh
I get the result "File exists."
But if I run it via crontab, I get the result "File does not exist"
My crontab looks like this:
* * * * * /home/user/test.sh >> /home/user/test.log 2>&1
I've spent all day trying to check the logic and everything... I can't seem to figure out why this is so. Thanks for all your help in advance :)
Edit: scp looks in mysite.com:/home/$USER/tmp/ dir
The $USER on my desktop and the server are same. So I don't think it's an issue of relativeness.
If I were to
ssh $USER@mysite.com
and then do
ls tmp/
I'll see the file there.
Also, the crontab entry is in my crontab, not another users' or root's crontab.
@Jonathan: I've set up key-based authentication. No password required!
@netcoder: In my log file, I see repeated lines of "File does not exist."
@sarnold: in my scp line, I've put john@mysite.com, just to make sure that cron uses john's account on mysite.com when crond runs the script. Still, same result.
I expect the problem is right here: mysite.com:tmp/$USER -- tmp/ is a relative path, relative to the current working directory. When your code is executed via crond(8), your cwd might be different than when you execute it by hand.
As @netcoder points out in his comment, absolute paths are the best way to work with scripts and programs executed out of crontab(5) files.
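For example (with $USER written out as john, matching the setup above), spelling out both ends absolutely removes any dependence on cron's working directory:
/usr/bin/scp -q john@mysite.com:/home/john/tmp/john /home/john/tmp/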
It may be a problem with your $USER environment variable not being set when run under cron.
You could try adding something like this to your script:
echo "User: $USER" > /tmp/crontest.log
After getting cron to run the script have a look at what is in /tmp/crontest.log
If nothing is being set, you might want to try something like this: Where can I set environment variables that crontab will use?
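For instance, you can define the variable in the crontab itself, or give the script a fallback (a sketch, using john as the hypothetical value):
# In the crontab (assignments here apply to every job listed below them):
USER=john
* * * * * /home/user/test.sh >> /home/user/test.log 2>&1

# Or, at the top of test.sh, provide a default if cron didn't set it:
USER=${USER:-john}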