Ruby expect sftp script runs from command line, fails to log in from cron

I have a ruby program that uses expect to perform a password-login sftp session.
When I run the program from the command line, it works, but when I let cron run it, the remote server never acknowledges that the program has sent the password.
A shell-based expect script written by somebody else does work from cron.
Is this problem related to ptys? Any ideas on how to troubleshoot?
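Not from the original question, but one way to start troubleshooting (a sketch; all paths are placeholders) is to have cron call a small wrapper instead of the Ruby program directly, logging the environment, whether a controlling terminal exists, and the program's output. Differences from an interactive run (a stripped-down PATH, missing locale variables, or no tty at all) usually show up in this log.
#!/bin/bash
# Hypothetical wrapper to be called from crontab instead of the Ruby program directly.
LOG=/tmp/sftp_cron_debug.log
{
  echo "=== $(date) ==="
  env | sort                     # compare with `env | sort` from an interactive shell
  tty || echo "no controlling terminal"
  ruby /path/to/sftp_expect.rb   # placeholder path to the Ruby expect program
  echo "exit status: $?"
} >> "$LOG" 2>&1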

Related

Execute bash script that will continue though Apache restarts

I need to have a bash script triggered and run, but part of the script requires Apache to restart, which obviously kills the script before it can continue. I can't move the restarts in the script to the end.
I have tried to run the bash script through a PHP script using shell_exec() in a GNU screen session to keep it going, but that doesn't work: as soon as Apache goes down, the script stops.
There has to be a way to do this but I'm not seeing it.
How can I accomplish this?
Does nohup do the job?
nohup is a POSIX command which means "no hang up". Its purpose is to execute a command such that it ignores the HUP (hangup) signal and therefore does not stop when the user logs out.
Output that would normally go to the terminal goes to a file called nohup.out, if it has not already been redirected.
https://en.wikipedia.org/wiki/Nohup
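A minimal sketch of how that could look here, assuming the long-running script lives at /path/to/restart_and_continue.sh (a placeholder name):
# Started from shell_exec() or an interactive shell: immune to HUP, backgrounded,
# with output captured in a log instead of nohup.out.
nohup /path/to/restart_and_continue.sh > /tmp/restart_and_continue.log 2>&1 &
# If Apache signals its whole process group on restart, additionally starting a
# new session with setsid (util-linux) may be needed:
# setsid nohup /path/to/restart_and_continue.sh > /tmp/restart_and_continue.log 2>&1 &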

How to send input to a console/CLI program running on remote host using bash?

I have a script that I normally launch using the following syntax:
ssh -Yq user@host "xterm -e '. /home/user/bin/prog1 $arg1;prog2'"
(note: I've removed some of the complexities of the command, so please excuse any syntax errors in the ssh command; they should not be relevant to the question)
This launches an xterm window that runs prog1, and after completion runs prog2. prog2 is a console-style program that performs some setup, then several seconds later waits for user input.
Is there a way via bash script (preferably without downloading external packages) that I can send data to prog2 that's running on $host?
I've looked into << and expect, but it's way over my head. My intuition is that there's probably a straightforward way of doing this, but I can't figure out what terms to search for. I also understand that I can remotely send keystrokes to a host using xdotools or something similar, but I'm hesitant to request a new package installation unless I know that's the only reasonable solution.
Thanks!
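One untested sketch of the idea (prog2 and the reply lines are placeholders from the question): if prog2 reads its answers from standard input rather than directly from the terminal, they can be fed over ssh with a here-document, skipping the xterm entirely.
# Feed prog2's prompts from a here-document instead of typing them interactively.
ssh -q user@host 'prog2' <<'EOF'
first reply prog2 waits for
second reply prog2 waits for
EOF
If prog2 insists on reading from a terminal device rather than standard input, this won't work, and an expect-style tool driving a pseudo-terminal is the usual fallback.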

How do I embed an expect script within a bash script so the shell I open doesn't close after the expect script finishes?

I've been writing Bash scripts that work with a database lately.
To access the database, I need to ssh into a DiskStation (requires password) and then sudo a docker command (requires password) to access the container that the database is in. Only then can I execute and test out my scripts.
I wrote an expect script that automates this process and I want to embed it in my Bash scripts, but the only problem is the shell closes as soon as the expect script finishes executing.
Does anybody know how to work around this? I attached a photo with specific info removed (bash script with embedded expect script).
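A common pattern for this, sketched below with placeholder host, container, and prompt strings (the real ones were in the removed photo): run the expect body with expect -c and finish it with interact, which hands the spawned session back to the user instead of letting expect exit and close the shell.
#!/bin/bash
# Placeholder names throughout; the password is read from the DS_PASSWORD
# environment variable rather than being hard-coded.
expect -c '
    spawn ssh admin@diskstation
    expect "password:"
    send "$env(DS_PASSWORD)\r"
    expect "$ "
    send "sudo docker exec -it db-container bash\r"
    expect "password"
    send "$env(DS_PASSWORD)\r"
    # interact keeps the session open and gives control back to the user
    interact
'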

Simple script run via cronjob doesn't work but works from shell

I am on shared hosting and I'm trying to schedule a cron job to run every now and then. Via cPanel I scheduled my script for execution, but even though, according to my host's support, the cron job runs, the script doesn't seem to be doing anything. The cron job command I set via cPanel is:
/bin/sh /home1/myusername/public_html/somefolder/cronjob2.sh
and cronjob2.sh contains:
#!/bin/bash
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0
when via SSH I execute:
/home1/myusername/public_html/somefolder/cronjob2.sh
it stops the forever process as needed. Run from the cron job, it doesn't do anything.
How can I get this working?
EDIT:
So I've tried:
/bin/sh /home1/username/public_html/somefolder/cronjob2.sh >> /tmp/mylog 2>&1
and mylog entries say:
/usr/bin/env: node: No such file or directory
It seems that forever needs to run node and this cannot be found. How would I possibly fix this?
EDIT2:
Accepted answer at superuser.com. Thank you all for the help:
https://superuser.com/questions/763261/simple-script-run-via-cronjob-doesnt-work-but-works-from-shell/763288#763288
For cron job lines in a crontab it's not required to specify the kind of shell (or of perl, etc.).
It's enough that your script contains a shebang line.
Therefore you should remove /bin/sh from your cron job line.
Another aspect that might cause different behavior of your script between an interactive start and a start by the cron daemon is a possibly different environment, first of all the PATH variable. Therefore check whether your script can be executed in the very restricted environment that the cron daemon provides. You can determine your cron job's environment experimentally by starting a temporary cron job that executes the "env" command and writes its output to a file.
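For example, a throwaway crontab entry like the following (the output path is chosen arbitrarily) writes cron's actual environment to a file once a minute; compare it with env from an interactive shell and then remove the entry:
* * * * * env > /tmp/cron_env.txt 2>&1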
One more aspect: have you redirected STDOUT and STDERR of the cron job to a log file and read its contents to analyze the issue? You can do it as follows:
your_cron_job >/tmp/any_name.log 2>&1
According to what you wrote, when you run your script via SSH, you are using bash, because this is the first line of your script:
#!/bin/bash
However, in the crontab, you are forcing the use of sh instead of bash. Are you sure your script is fully compatible with sh? Otherwise, simply replace /bin/sh with /bin/bash in your cron command and test again.
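Given the /usr/bin/env: node: No such file or directory error above, one likely fix (a sketch; the node directory is a guess and should be replaced with the output of `which node` from an SSH session) is to set PATH explicitly at the top of cronjob2.sh:
#!/bin/bash
# Prepend the directory that actually contains the node binary (placeholder path).
export PATH="/home1/myusername/nodejs/bin:$PATH"
/home1/myusername/public_html/somefolder/node_modules/forever/bin/forever stop 0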

Running remotely Linux script from Windows and get execution result code

I have the current scenario to deal with:
I have to schedule the backup of my company's Linux-based server (under Suse Linux) with ARCServe R15 (installed on Windows 2003R2SP2).
I know I have the ability in my backup software (ARCServe) to add pre/post execution scripts to my backup-jobs.
If the script fails, ARCServe would be configured NOT to run the backup job; if it succeeds, to run it. I have no problem with this.
The problem is, I want to make a Windows script (to be launched by ARCServe) that executes a Linux script on the cluster:
- If the Linux script fails, I want my Windows script to fail, so my backup job in ARCServe won't run
- If the Linux script succeeds, I want my Windows script to end normally with exit code 0, so my ARCServe job will run normally.
I've tried creating this batch file (let's call it HPC.bat):
echo ON
start /wait "C:\Program Files\PUTTY\plink.exe" -v -l root -i "C:\IST\admin\scripts\HPC\pri.ppk" [cluster_name] /appli/admin/backup_admin
exit %errorlevel%
If I manually launch this .bat by double-clicking on it, or launching it in a command prompt under Windows, it executes normally and then ends.
If it is launched by ARCServe, the script never seems to end.
My job stays in "waiting" status; it seems the exit code of the Linux script isn't returned to my batch file, so the batch never closes.
In my mind, what's happening is that plink just opens the connection to the Linux server, sends the script-execution command, and then closes the connection, so the exit code can't be returned to the batch. Am I right?
Is what I want to do possible, or am I trying to do something impossible?
So, do I have to proceed differently?
Do I have to use PuTTY or Cygwin instead of plink?
Please, it's giving me headaches...
If you install Cygwin, you could do it exactly as you would from Linux to Linux, i.e. remotely run a command with ssh someuser@remoteserver.com somecommand
This command will return on the calling client with the same return code that the command exited with on the remote end. If you use SSH shared keys for authentication instead of passwords, it can also be scripted without user interaction.
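A sketch of that approach, run from Cygwin's bash on the Windows side (the key path and cluster name are placeholders, and the key would need to be in OpenSSH format, e.g. exported from the .ppk with PuTTYgen):
#!/bin/bash
# ssh exits with the exit status of the remote command, so ARCServe sees
# success or failure of the Linux-side script directly.
ssh -i /cygdrive/c/IST/admin/scripts/HPC/id_rsa root@cluster_name /appli/admin/backup_admin
exit $?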
