I'm running the script below on RHEL 7.9. After connecting to the remote machine and exec'ing pbrun, the last-login message is displayed, and sometimes it interferes with the command, forcing me to do many more steps than necessary. It's downright frustrating.
As I stated in the subject, I have tried this with "ssh -q" alone, and with "ssh -q" plus a .hushlogin file in both my home directory and root's.
I'm sure it has to do with switching users, but I can't figure out how to get rid of the last-login message. **** MODIFYING SYSTEM FILES IS OUT OF THE QUESTION ****
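The closest thing to a workaround I can see is filtering the banner out client-side. A minimal sketch, assuming the banner is always a single line beginning with "Last login:" (which may not hold everywhere):
# hack fallback: strip the banner from pbrun's output on the client side
$sshpass $HOST "echo chown -R $IAM /tmp/perf | pbrun /bin/su -" 2>&1 | grep -v '^Last login:'
But that feels like a hack, so I'd much rather suppress the message at the source.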
There is also a second issue when trying to remove a file: the "CASCU050E Failed to send data (Reason=[send failed], Rc=[111])." errors in the output below.
Any help greatly appreciated!
Thanks
Here's the script:
if [ "$#" -ne 2 ]; then
echo "You must enter exactly 2 command line arguments"
echo "argument 1 = file containing all hosts to test, one fqdn per line."
echo "argument 2 = full path to output file."
exit 1
fi
echo 'Please enter your AD password for API authentication'
echo -n "Password: "
read -s passy
echo ""
sshpass="sshpass -p$passy ssh -q -oStrictHostKeyChecking=no"
scppass="sshpass -p$passy scp -r -q -oStrictHostKeyChecking=no"
hosts=$(cat "$1")
IAM=$(whoami)
echo "*** Moving existing /tmp/perf directories out of the way. ***"
for HOST in $hosts; do
if ($sshpass $HOST ps waux | grep run-fio-tests | grep -v grep >/dev/null 2>&1)
then
echo "*** fio test is currently running on host $HOST, STOPPING THE FIO RUN. ***"
break
fi
#####
#Tried with the two following lines uncommented, no change in behavior.
#####
# $sshpass $HOST "touch .hushlogin"
# $sshpass $HOST "echo touch .hushlogin | exec pbrun /bin/su -"
dirs=$($sshpass $HOST "ls /tmp/ | grep -i ^perf" | grep -v '\.tgz')
for dir in $dirs; do
echo "Moving existing /tmp/$dir directory to /tmp/$dir.`date +%Y%m%d-%H%M`.tgz on $HOST"
$sshpass $HOST "tar czf /tmp/$dir.`date +%Y%m%d-%H%M`.tgz -P /tmp/$dir 2>/dev/null"
$sshpass $HOST "echo chown -R $IAM /tmp/perf | pbrun /bin/su -"
$sshpass $HOST "rm -rf /tmp/$dir"
done
done
for HOST in $hosts; do
echo "*** Cleaning up on $HOST ***"
$sshpass $HOST "echo rm -rf /tmp/data/randread-raw-data-5G /tmp/data/seqread-raw-data-32G /tmp/data/seqwrite-raw-data-5G | exec pbrun /bin/su -"
$sshpass $HOST "echo rm -rf /tmp/RUNFIO.sh | exec pbrun /bin/su -"
done
Here are the errors I'm getting:
Please enter your AD password for API authentication
Password:
*** Moving existing /tmp/perf directories out of the way. ***
Last login: Fri Aug 6 22:51:55 MST 2021
Moving existing /tmp/perf directory to /tmp/perf.20210806-2255.tgz on host1.acme.com
Last login: Fri Aug 6 22:55:38 MST 2021
*** End Moving old perfs. ***
*** Cleaning up on host1.acme.com ***
CASCU050E Failed to send data (Reason=[send failed], Rc=[111]).
CASCU050E Failed to send data (Reason=[send failed], Rc=[111]).
CASCU050E Failed to send data (Reason=[send failed], Rc=[111]).
CASCU050E Failed to send data (Reason=[send failed], Rc=[111]).
CASCU050E Failed to send data (Reason=[send failed], Rc=[111]).
Last login: Fri Aug 6 22:55:45 MST 2021
Related: I found a similar question here, but the answer to that question didn't work for me.
I am trying to connect to a remote SSH server from Ruby using Net::SSH.
It works fine for all the commands provided via the script, and I can read each command's output successfully.
But when I use the command below, the script gets stuck in SSH.exec!(cmd) and control never returns from that line. The script only ends if I press Ctrl+C on the command line.
sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25;'
This is the script I run from an SSH terminal with no issue:
sudo -S su root -c 'pkill cockroach'
sudo -S su root -c '
cd ~;
mv /home/ubuntu/certs /home/ubuntu/certs.back.back;
mkdir /home/ubuntu/certs;
mkdir -p /home/ubuntu/my-safe-directory;
cockroach cert create-ca --allow-ca-key-reuse --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-node localhost 34.207.235.139 172.31.17.244 $(hostname) --certs-dir /home/ubuntu/certs --ca-key /home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-client root --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
'
sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25;'
This is the Ruby script that attempts to do exactly the same, but it gets stuck:
require 'net/ssh'
ssh = Net::SSH.start('34.207.235.139', 'ubuntu', :keys => './plank.pem', :port => 22)
s = "sudo -S su root -c 'pkill cockroach'"
print "#{s}... "
puts ssh.exec!(s)
s = "sudo -S su root -c '
cd ~;
mv /home/ubuntu/certs /home/ubuntu/certs.back.#{rand(1000000)};
mkdir /home/ubuntu/certs;
mkdir -p /home/ubuntu/my-safe-directory;
cockroach cert create-ca --allow-ca-key-reuse --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-node localhost 34.207.235.139 172.31.17.244 $(hostname) --certs-dir /home/ubuntu/certs --ca-key /home/ubuntu/my-safe-directory/ca.key;
cockroach cert create-client root --certs-dir=/home/ubuntu/certs --ca-key=/home/ubuntu/my-safe-directory/ca.key;
'"
print "Installing SSL certifications... "
puts "done (#{ssh.exec!(s)})"
s = "sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25;'"
print "Running start command... "
puts "done (#{ssh.exec!(s)})"
# Use this command to verify the node is running:
# ps ax | grep cockroach | grep -v grep
s = "ps ax | grep cockroach | grep -v grep"
print "#{s}... "
sleep(10)
puts "done (#{ssh.exec!(s)})"
ssh.close
exit(0)
Here is the output of the Ruby script:
C:\code2\blackstack-deployer\examples>ruby start-crdb-environment.rb
sudo -S su root -c 'pkill cockroach'...
Installing SSL certifications... done ()
Running start command...
As you can see, the command gets stuck at the line Running start command...
I tried putting the command in the background:
s = "sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25 &'"
print "Running start command... "
puts "done (#{ssh.exec!(s)})"
but what happened is that the cockroach process never starts (ps ax | grep cockroach | grep -v grep returns nothing).
I figured out how to fix it.
I added > /dev/null 2>&1 at the end of the command, and it worked.
cockroach start --background --max-sql-memory=.25 --cache=.25 --advertise-addr=%net_remote_ip%:%crdb_database_port% --certs-dir=%crdb_database_certs_path%/certs --store=%name% --listen-addr=%eth0_ip%:%crdb_database_port% --http-addr=%eth0_ip%:%crdb_dashboard_port% --join=%net_remote_ip%:%crdb_database_port% > /dev/null 2>&1;
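As far as I understand it (my assumption, not something from the CockroachDB docs), the SSH exec channel stays open until the remote command's stdout and stderr are closed, and the backgrounded cockroach process keeps them open; the redirect detaches them. You can see the same effect with a throwaway command (somehost is just a placeholder):
ssh somehost "sleep 100 &"                  # hangs: the backgrounded sleep still holds stdout/stderr open
ssh somehost "sleep 100 > /dev/null 2>&1 &" # returns immediately: the child's stdio is detached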
Are there any logs/output in the cockroach-data/logs directory (it should be located wherever you are running the start command from)?
Or perhaps try redirecting stdout+stderr to a file and seeing if there is any output there.
My hypothesis is that the CRDB process isn't starting correctly, so control isn't being returned to the terminal. The CockroachDB docs say that the --background flag only returns control when the crdb process is ready to accept connections. And the question/answer you linked noted that "SSH.exec! will block further execution until the command returns".
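For the redirect suggestion above, a sketch (the log path is arbitrary):
sudo -S su root -c 'cockroach start --advertise-addr=34.207.235.139:26257 --certs-dir=/home/ubuntu/certs --store=node0015 --listen-addr=172.31.17.244:26257 --http-addr=172.31.17.244:8080 --join=34.207.235.139:26257 --background --max-sql-memory=.25 --cache=.25 > /tmp/crdb-start.log 2>&1'
If --background is waiting on a node that never becomes ready to accept connections, the reason should show up in that file.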
I am trying to build a script that initializes Vault and, if it is not yet initialized, creates the keys and saves them in GCP Secret Manager, all from a GCE instance bootstrap script. It fails at the beginning of the if statement with the error startup-script exit status 2. This is my script:
#### Initialize Vault - Token in Clear txt ####
export VAULT_ADDR="http://127.0.0.1:8200"
export VAULT_SKIP_VERIFY=true
until curl -fs -o /dev/null localhost:8200/v1/sys/init; do
echo "Waiting for Vault to start..."
sleep 1
done
init=$(vault operator init -status)
if [ "$init" != "Vault is initialized" ]; then
echo "Initializing Vault"
install -d -m 0755 -o vault -g vault /etc/vault
SECRET_VALUE=$(vault operator init -recovery-shares=1 -recovery-threshold=1 | tee /etc/vault/vault-init.txt)
echo "Storing vault init values in secrets manager"
gcloud secrets create vault-secrets --replication-policy="automatic"
echo -n "$${SECRET_VALUE}" | gcloud secrets versions add vault-secrets --data-file=-
else
echo "Vault is already initialized"
exit 0
fi
This is a snippet of the syslog:
Mar 11 14:50:45 private-mesh-vault-cluster-rlnq startup-script: + init='Vault is not initialized'
Mar 11 14:50:45 private-mesh-vault-cluster-rlnq google_metadata_script_runner[524]: startup-script exit status 2
What is wrong with my script?
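One detail that may matter here: vault operator init -status reports through its exit code (0 = initialized, 2 = not initialized, 1 = error). If the bootstrap script runs under set -e (an assumption; it is not shown above), the command substitution alone aborts the script with status 2 before the if is ever reached, which would match the syslog. A sketch of a check that tolerates the non-zero exit code:
init_status=0
vault operator init -status >/dev/null 2>&1 || init_status=$?   # capture the code without tripping set -e
if [ "$init_status" -eq 2 ]; then
  echo "Initializing Vault"
  # (the install/init/Secret Manager steps from the script above go here)
else
  echo "Vault is already initialized"
fi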
I need to script a way to do the following (note all is done on the local machine as root):
runuser -l user1 -c 'ssh localhost' &
runuser -l user1 -c 'systemctl --user list-units'
The first command should be run as root. The end goal is to log in as "user1" so that if any user runs who, "user1" will appear in the list. Notice how the first command is backgrounded before the next command is run.
The next command should be run as root as well, NOT user1.
Problem: These commands run fine when run separately, but when run in a script, "user1" never shows up in the output of who. Here is my script:
#!/bin/bash
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
echo
sleep 1
echo "[+] Running systemctl --user commands as root."
runuser -l user 1 -c 'systemctl --user list-units'
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
echo "[+] Done."
When running the script, it looks like it is able to ssh into the system, but who does not show the user logged in, nor does any ps aux output show an ssh session. Note: I commented out the kill line to check whether the process stays around, and I do not see it at all.
How do I make the bash script fork two processes? Process 1's goal is to log in as "user1" and wait. Process 2 then performs commands as root while user1 is logged in.
My goal is to run systemctl --user commands as root via a script. If you're familiar with the systemctl --user domain, there is no way to manage systemctl --user units without the user being logged in via traditional methods (ssh, direct terminal, or GUI). I cannot "su - user1" as root either. So I want to force an ssh session as root to the vdns11 user via runuser commands. Once the user is authenticated and shows up via who, I can run systemctl --user commands. How can I keep the ssh session active in my code?
With this additional info, the question essentially boils down to 'How can I start and background an interactive ssh session?'.
You could use script for that. It can be used to trick applications into thinking they are being run interactively:
echo "[+] Starting SSH session in background"
runuser -l user1 -c "script -c 'ssh localhost'" &>/dev/null &
pid=$!
...
echo "[+] Killing active SSH session"
kill ${pid}
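An alternative sketch that drops script and instead forces pseudo-terminal allocation with ssh's own -tt flag (the doubled t forces a tty even though stdin is not a terminal):
echo "[+] Starting SSH session in background"
runuser -l user1 -c 'ssh -tt localhost' &>/dev/null &
pid=$!
Either way the idea is the same: the session has to look interactive for a login entry to be registered and show up in who.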
Original answer before OP provided additional details (for future reference):
Let's dissect what is going on here.
I assume you start your script as root:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
So root runs runuser -l user1 -c '...', which itself runs ssh -q localhost 2>/dev/null as user1. All this takes place in the background due to &.
ssh will print Pseudo-terminal will not be allocated because stdin is not a terminal. (hidden due to 2>/dev/null) and immediately exit. That's why you don't see anything when running who or when running ps.
Your echo says [+] Becoming user1, which is quite different from what's happening.
sleep 1
The script sleeps for a second. Nothing wrong with that.
echo "[+] Running systemctl --user commands as root."
#runuser -l user 1 -c 'systemctl --user list-units'
# ^ typo!
runuser -l user1 -c 'systemctl --user list-units'
Ignoring the typo, root again runs runuser, which itself runs systemctl --user list-units as user1 this time.
Your echo says [+] Running systemctl --user commands as root., but actually you are running systemctl --user list-units as user1 as explained above.
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
This would kill the ssh process that was started at the beginning of the script, but that process has already exited, so it does nothing. As a side note, this could be accomplished a lot more easily:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
pid=$!
...
echo "[+] Killing active ssh sessions."
kill $(pgrep -P $pid)
So this should give you a better understanding of what the script actually does, but between the goals you described and the conflicting echoes within the script, it's really hard to figure out where this is supposed to be going.
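One last aside: if the underlying goal is only to manage user units as root without a login session, lingering may remove the need for the ssh trick altogether. A sketch, assuming a systemd recent enough to combine --user with --machine:
loginctl enable-linger user1                        # keep user1's systemd user manager running
systemctl --user --machine=user1@.host list-units   # talk to that manager directly as root
I have not verified this against your systemd version, so treat it as a pointer rather than a drop-in fix.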
I have a bash script which checks whether a file exists on a remote server.
When I execute this script on the command line, it works fine and tells me the file exists (as it should). But when crontab executes the script, it says the file does not exist (although it does).
Can anybody help me?
myscript.sh
#!/bin/bash
if $(sudo ssh -i <path/to/ssh/keys> <user>@<ip> "[[ -f /etc/ssl/file.txt ]]"); then
echo "exist"
else
echo "not exist"
fi
crontab:
*/1 * * * * bash /home/user/myscript.sh | mail -s "subject" user@email.com
stderr (when I run the script on the command line):
++ sudo ssh -i <path/to/ssh/keys> <user>@<ip> '[ -f /etc/ssl/file.txt ]'
+ echo exist
exist
stderr (when the script runs from cron):
++ sudo ssh -i <path/to/ssh/key> <user>@<ip> '[ -f /etc/ssl/file.txt ]'
Warning: Identity file /root/.ssh/key/keyfile not accessible: No such file or directory.
Permission denied, please try again.
Permission denied, please try again.
root@<ip>: Permission denied (publickey,password).
Permissions of the ssh keyfile:
-rw------- 1 root root 3243 Sep 30 15:34 keyfile
-rw-r--r-- 1 root root 741 Sep 30 15:34 keyfile.pub
Thanks for helping :D
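The warning in the cron run (Identity file /root/.ssh/key/keyfile not accessible) suggests the -i path resolves differently under cron than in your interactive shell, since cron starts with an almost empty environment and a different working directory. One way to approximate what cron sees (a sketch; env -i strips the environment, which is harsher than but similar to cron's setup):
env -i /bin/bash /home/user/myscript.sh   # run the script with a near-empty environment, like cron does
If the same warning appears, using an absolute path to the keyfile in the script should fix the cron run.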
I'm trying to log everything that happens during an ssh session while also showing the output in the shell.
sshpass -p "password" ssh -tt -o ConnectTimeout=10 -oStrictHostKeyChecking=no username#"$terminal" 'bash -s' < libs/debug-mon.lib "$function" | grep -E '^INFO:|^WARNING:' || echo "WARNING: Terminal not reacheable or wrong IP" | tee -a libs/debug-monitor-logs
I'm not getting anything in the log file libs/debug-monitor-logs.
Could you please help me see where the issue is?
Thanks
Looks like the only thing you will ever write into the log file is "WARNING: Terminal not reacheable or wrong IP".
Try something like this:
(command-that-might-fail || echo error message) | tee -a log-file
instead of
commant-that-might-fail || echo error message | tee -a log-file
(put the whole expression that you want to pipe into tee in parentheses)
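Applied to your command, that would look something like this (a sketch, kept on one line as in your original):
(sshpass -p "password" ssh -tt -o ConnectTimeout=10 -oStrictHostKeyChecking=no username@"$terminal" 'bash -s' < libs/debug-mon.lib "$function" | grep -E '^INFO:|^WARNING:' || echo "WARNING: Terminal not reacheable or wrong IP") | tee -a libs/debug-monitor-logs
Now both the filtered session output and the fallback warning flow through tee into libs/debug-monitor-logs while still printing to the shell.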