Automatically connect to an FTP server using batch commands, continue with the batch script, and then run FTP commands when necessary - Windows

Here's what I wish to do. Is it possible using a batch file (Windows XP)?
Prompt the user to log in to an FTP server.
Carry on with the rest of the batch commands, whilst keeping the FTP session/login alive.
Use the FTP PUT command when required with a batch command.

No, I cannot think of a way to keep the ftp process alive while continuing to run the batch file and sending commands to it.
However, if this fits your needs, your batch file can collect all the data it needs first, generate a file containing all the ftp commands, then pass that file to ftp.exe as a last step.
For example:
SET /P ftpuser=Username:
SET /P ftppass=Password:
:: generate the ftp script file (redirection placed first so no stray trailing space ends up in the username/password lines)
>ftpcommands.txt ECHO %ftpuser%
>>ftpcommands.txt ECHO %ftppass%
>>ftpcommands.txt ECHO put file.txt
>>ftpcommands.txt ECHO quit
:: now call ftp and have it process all the commands
ftp -s:ftpcommands.txt server.com

Windows ftp supports batch processing with its -s switch. However, there is no good way to keep an FTP session alive while waiting for more commands. The ftp command will process its -s script from start to finish and then exit.
Other than having the batch script generate the script used by ftp -s and run it whenever an upload is required, the only solution I can think of would be to map the FTP server as a drive letter and copy files to it as needed.
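A minimal sketch of that first approach in batch form: a small helper label that regenerates the ftp -s script and uploads one file each time it is called. The label name :ftpput, the file names and server.com are illustrative placeholders, not part of the original answers.
SET /P ftpuser=Username:
SET /P ftppass=Password:
CALL :ftpput report.txt
REM ... carry on with other batch work here ...
CALL :ftpput summary.txt
GOTO :eof

:ftpput
REM regenerate the ftp command script and upload the file passed as %1
>ftpcommands.txt ECHO %ftpuser%
>>ftpcommands.txt ECHO %ftppass%
>>ftpcommands.txt ECHO put %~1
>>ftpcommands.txt ECHO quit
ftp -s:ftpcommands.txt server.com
GOTO :eof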

Related

How to open more than 1000 SSH sessions at one click using PuTTY?

I want to open 1000 SSH sessions to a remote server using PuTTY on Windows 10 for test purposes. How can I do that in an easy way?
You can use a batch file like this:
for /l %%N in (1 1 1000) do putty.exe -load "your session"
Based on: How to execute one line multiple times using windows batch file?
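Note that when a GUI program such as putty.exe is launched from a batch file, cmd.exe waits for it to exit before the loop continues, so the sessions would open one at a time. If the goal is to have them all open at once, a variant using start should avoid that wait (a sketch, assuming the same saved session name):
for /l %%N in (1 1 1000) do start "" putty.exe -load "your session"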

Redirect ssh output to file while performing other commands

I'm looking for some help with a script of mine. I'm new to bash scripting and I'm trying to start a service on a remote host with ssh, and then capture all the output of this service to a file on my local host. The problem is that I also want to execute other commands after this one:
ssh $remotehost "./server $port" > logFile &
ssh $remotehost "nc -q 2 localhost $port < $payload"
Now, the first command starts an HTTP server that simply prints out any request it receives, while the second command sends a request to that server.
Normally, if I were to execute the two commands on two separate shells I would get the first response on the terminal, but now I need it on the file.
I would like to have the server output all the requests on the log file, keeping a sort of open ssh connection to receive any new output of the server process.
I hope I made myself clear.
thank you for your help!
EDIT: Here's the output of the first command:
(Output is empty in the terminal... it waits for requests.)
As you can see, the command doesn't return anything yet; it just waits.
When I execute the second command in a new terminal (the request), the output of the first terminal is the following:
The request is displayed.
Now I would like to execute both commands in sequence in a bash script, sending the output of the first command (which is empty until the second command is run) to a file, so that ANY output triggered by later requests ends up in that file.
EDIT2: As of now, with the commands above, the server answers any requests but the output is not registered in the log file.
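The question is left open above. Purely as a sketch of the pattern being described (not a confirmed fix for this specific case), two usual precautions are to capture stderr as well as stdout and to detach the backgrounded ssh from the local terminal's stdin with -n:
ssh -n "$remotehost" "./server $port" > logFile 2>&1 &
sleep 1   # give the remote server a moment to start listening
ssh "$remotehost" "nc -q 2 localhost $port < $payload"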

How to prevent PuTTY shell from auto-exit after executing command from batch file in Windows?

I have written a batch file like this:
Start putty.exe -ssh 172.17.0.52 -l root -m dummy.txt
Then in dummy.txt I have written these commands:
avahi-daemon --no-drop-root -D
export XVHMI_USERCONFIG_PATH=/home/UserProfileConfig
export XDG_RUNTIME_DIR=/tmp
cd /opt/bosch/airis/bin
When I run the .bat file, PuTTY starts, commands execute (hopefully, not sure) and it exits.
How to keep that window open?
I have googled for the same, but found no solid help. I read on Stack Overflow itself that we need to define something in the txt file, but what, and most importantly, how?
The SSH session closes (and PuTTY with it) as soon as the command finishes. Normally the "command" is the shell. As you have overridden this default "command" and yet you want to run the shell nevertheless, you have to explicitly execute the shell yourself:
avahi-daemon ... ; /bin/bash
Also, as use of the -m switch implies a non-interactive session, you probably want to force an interactive terminal back using the -t switch.
Though, I'm not really sure if you want to execute a shell or if you just want to see your command's output. If the latter, did you consider using plink? It's a console terminal client from the PuTTY package. Being a console application, it inherits the console of the parent batch file, and you can keep the batch console from closing by using the pause command, if needed.
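A sketch of that plink variant, reusing the asker's host and dummy.txt (the pause simply keeps the batch window open after plink returns):
plink.exe -ssh 172.17.0.52 -l root -m dummy.txt
pause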
Another option (both for PuTTY and plink) is to pause on the remote side, e.g. using the read command:
avahi-daemon ... ; read
As suggested by Martin, I tried these steps:
putty.exe -ssh 172.17.0.52 -l root -m dummy.txt -t
added /bin/bash at the end of the commands in dummy.txt
It worked for me. Please note, you have to follow both steps mentioned above.
This way you can keep the session alive and manually execute further commands.
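For clarity, this is what dummy.txt ends up containing with that change applied (the first four lines are the asker's original commands, with the shell appended as the answer suggests):
avahi-daemon --no-drop-root -D
export XVHMI_USERCONFIG_PATH=/home/UserProfileConfig
export XDG_RUNTIME_DIR=/tmp
cd /opt/bosch/airis/bin
/bin/bash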

Running remotely Linux script from Windows and get execution result code

I have the current scenario to deal with:
I have to schedule the backup of my company's Linux-based server (under Suse Linux) with ARCServe R15 (installed on Windows 2003R2SP2).
I know I have the ability in my backup software (ARCServe) to add pre/post execution scripts to my backup-jobs.
If the script fails, ARCServe can be set NOT to run the backup job, and if it succeeds, to run it. I have no problem with this part.
The problem is, I want to make a windows script (to be launched by ARCServe) for executing a Linux script on the cluster:
- If this Linux script fails, I want my Windows script to fail, so my backup job in ARCServe won't run
- If the Linux script succeeds, I want my Windows script to end normally with error code 0, so my ARCServe job runs normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I manually launch this .bat by double-clicking on it, or launching it in a command prompt under Windows, it executes normally and then ends.
If I make it being launched by ARCServe, the script seems never to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, and the batch doesn't close.
In my mind, what's happening is that plink just opens the connection to the Linux machine, sends the script execution signal, and then closes the connection, so the exit code can't be returned to the batch. Am I right?
Is what I want to do possible or am I trying something impossible to do ?
So, do I have to proceed differently ?
Do I have to use PUTTY or CygWin instead of plink ?
Please, it's giving me headaches ...
If you install Cygwin, you could do it exactly like you can do it on Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command will return on the calling client with the same return code that the command exited with on the remote end. If you use SSH keys for authentication instead of passwords, it can also be scripted without user interaction.
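A minimal sketch of what the HPC.bat wrapper could then look like, assuming Cygwin's ssh.exe is on the PATH and key-based authentication to the cluster is already set up ([cluster_name] is the asker's own placeholder):
@echo off
ssh root@[cluster_name] /appli/admin/backup_admin
exit /b %errorlevel%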

Send command to a background process

I have a previously running process (process1.sh) that is running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with a PID of 1111?
I don't want to start a new process1.sh!
Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
This first script is run when the computer starts up. It starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input`
rm /tmp/srv-input
mkfifo /tmp/srv-input
# To avoid your server receiving an EOF: at least one process must have
# the fifo open for writing so your server does not receive an EOF.
cat > /tmp/srv-input &
# The PID of this command is saved in the /tmp/srv-input-cat-pid file
# for a later kill.
#
# To send an EOF to your server, you need to kill the `cat > /tmp/srv-input`
# process whose PID has been saved in the `/tmp/srv-input-cat-pid` file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
#
# Replace the `./hlds_run -console -game czero +port 27015` by your application command
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat $half_life_folder/$half_life_pid_tail_file_name)"
if ps -p $half_life_pid_tail > /dev/null
then
echo "$half_life_pid_tail is running"
else
echo "Starting the tailing..."
tail -2f $half_life_folder/my_logs.txt &
echo $! > $half_life_folder/$half_life_pid_tail_file_name
fi
echo "$#" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server I just do on the terminal:
./send.sh mp_timelimit 30
This script allows me to keep tailing the process on the current terminal: every time I send a command, it checks whether there is a tail process running in the background. If not, it just starts one, and every time the server produces output I can see it on the terminal I used to send the command, just like applications you run with the & operator.
You can also keep another terminal open just to listen to the server console. To do that, just use the tail command with the -f flag to follow the server's console output:
tail -f /home/user/Half-Life/my_logs.txt
If you don't want to be limited to signals, your program must support one of the Inter Process Communication methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
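As an illustration only (not from the original answer), socat can be used to experiment with that idea from the shell; it binds a Unix domain socket and passes each connection's data to the listener. The socket path and command below are made up for the example:
# listener: print every command received on the socket (run in background)
socat UNIX-LISTEN:/tmp/cmd.sock,fork STDOUT &
# client: send a command to the listener
echo "mp_timelimit 30" | socat - UNIX-CONNECT:/tmp/cmd.sock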
For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?
You can use bash's coproc command (available only in 4.0+) - it's like ksh's |&.
Check this for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
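A tiny sketch of the coproc idea (bash 4+), using bc as a stand-in for a long-running process; the name CALC is arbitrary:
#!/bin/bash
# start bc as a coprocess; its stdin/stdout are exposed through the CALC array
coproc CALC { bc -l; }
# write a command to the coprocess and read its answer back
echo "2 + 2" >&"${CALC[1]}"
read -r result <&"${CALC[0]}"
echo "bc said: $result"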
You can't send new args to a running process.
But if you are implementing this process, or it's a process that can take its args from a pipe, then the other answer would help.
