Shell script - exiting before completion - bash

I have made a short shell script which launches a VM, sleeps for some time to allow the VM to boot, and then mounts a share from the VM on the host computer:
#!/bin/bash
nohup VBoxManage startvm "Ubuntu server" --type headless &&
sleep 60 &&
sudo mount -t cifs //192.168.1.1/www /media/ubuntuserver/
The VM starts properly and the script sleeps, but no mount occurs; the script seems to just exit instead. What am I doing wrong?

Is your sudo mount working in non-interactive mode? Make sure this command is not asking for a password.
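If it does prompt, one option is a sudoers rule scoped to that single command; a minimal sketch, assuming a hypothetical user name youruser and the usual mount location (check yours with: which mount). Create it with visudo -f rather than editing directly:
# /etc/sudoers.d/mount-www  (create with: sudo visudo -f /etc/sudoers.d/mount-www)
# NOTE: youruser and the /bin/mount path are assumptions; adjust for your system
youruser ALL=(root) NOPASSWD: /bin/mount -t cifs //192.168.1.1/www /media/ubuntuserver/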
Add some logging so that you know what output is being returned (note that the file redirection has to come before 2>&1, otherwise stderr still goes to the terminal instead of the log):
#!/bin/bash
nohup VBoxManage startvm "Ubuntu server" --type headless >> ~/script_log.txt 2>&1 &&
sleep 60 >> ~/script_log.txt 2>&1 &&
sudo mount -t cifs //192.168.1.1/www /media/ubuntuserver/ >> ~/script_log.txt 2>&1
Replace ~/script_log.txt with any suitable log file path.
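If you'd rather not repeat the redirection on every line, a variant sketch that redirects the whole script once with exec:
#!/bin/bash
# send stdout and stderr of everything below to the log
exec >> ~/script_log.txt 2>&1
nohup VBoxManage startvm "Ubuntu server" --type headless &&
sleep 60 &&
sudo mount -t cifs //192.168.1.1/www /media/ubuntuserver/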

Related

Prevent .bash_profile from executing when connecting via SSH

I have several servers running Ubuntu 18.04.3 LTS. Although it's considered bad practice to auto login, I understand the risks.
I've done the following to auto-login the user:
sudo mkdir /etc/systemd/system/getty@tty1.service.d
sudo nano /etc/systemd/system/getty@tty1.service.d/override.conf
Then I add the following to the file:
[Service]
ExecStart=
ExecStart=-/sbin/agetty --noissue --autologin my_user %I $TERM
Type=idle
Then, I edit the following file for the user to be able to automatically start a program:
sudo nano /home/my_user/.bash_profile
# Add this to the file:
cd /home/my_user/my_program
sudo ./program
This works great on the console when the server starts; however, when I SSH into the server, the same program is started, and I don't want that.
The simplest solution is to SSH with a different user but is there a way to prevent the program from running when I SSH in using the same user?
The easy approach is to check the environment for variables ssh sets; there are several.
# only run my_program on login if not connecting via ssh
if [ -z "$SSH_CLIENT" ]; then
    cd /home/my_user/my_program && sudo ./program
fi
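ssh sets a few related variables (SSH_CLIENT, SSH_TTY, SSH_CONNECTION), so a slightly more defensive variant checks all of them; if any is non-empty, the test fails and the program is skipped:
# only run my_program when none of the ssh-set variables are present
if [ -z "${SSH_CLIENT}${SSH_TTY}${SSH_CONNECTION}" ]; then
    cd /home/my_user/my_program && sudo ./program
fi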

Shutdown or Reboot a WSL session from inside the WSL session

I would like to be able to reboot WSL sessions. Doing so is a little awkward, as WSL does not use systemd, so we cannot use reboot. Within a WSL session, we can run any Windows executable:
boss@Asus: ~ $ wsl.exe -l -v
  NAME            STATE           VERSION
* Ubuntu-20.04    Running         2
  fedoraremix     Stopped         1
  Alpine          Stopped         1
  Ubuntu          Stopped         1
Therefore, we can use wsl.exe (you have to make sure to always add .exe when calling Windows commands, or they will not work) to shut down the currently running WSL session with wsl.exe -t Ubuntu-20.04, but the problem is that I don't know the session name.
When we are inside a WSL session, the hostname is something different, so I don't know how to find the name of the currently running session that I am inside (maybe there is a Windows process command that tells me what process I am running from?).
Ideally, I would like a command to equate to a reboot. I guess this would have to look something like:
Run an asynchronous command that will initiate a new session 5-10 seconds in the future, to allow the previous session to fully shut down (and that will not terminate when this session is terminated).
Terminate the currently running session with wsl.exe -t <my found name>.
A few seconds later, the new session will start up.
Credits to the commenters above.
To shutdown a session from within a WSL guest, you can run:
wsl.exe --terminate $WSL_DISTRO_NAME
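WSL exports the distribution name to the guest environment, so you can check what will be terminated before running it:
echo $WSL_DISTRO_NAME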
Rebooting is also possible; however, so far I do not know how to get the new terminal inside the same console window. The following will reboot the WSL guest and open a new console window for it when it has finished:
cd /mnt/c/ && cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" && wsl.exe --terminate $WSL_DISTRO_NAME
Explanation:
From the perspective of Windows, WSL systems are mounted as network resources. cmd does not support the resulting UNC path formats such as \\wsl$\Debian\<...>. Therefore it is best to cd to a directory it can resolve as a Windows path, such as C:\ (/mnt/c/), before it is executed. If omitted, cmd will complain and fall back to %windir% as its working directory.
&& runs the next command only after the previous one has succeeded, both in Linux shells and in Windows cmd.
cmd.exe /c starts a cmd instance and tells it to execute a command that follows.
start "<WindowTitle>" ... is a cmd-internal command to run another program inside its own window, independent of the cmd instance. In this case the program is another cmd window. It will not be affected when WSL shuts down.
Back in the original Linux terminal, the cmd.exe /c start command returns immediately, and the final command after && shuts down the guest as described above.
The second cmd window waits for a few seconds, then starts a new session of the same WSL machine.
Creating an Alias
You can make this easier to use by creating an alias. For bash users, edit your ~/.bashrc file and apply the changes afterwards:
nano ~/.bashrc && source ~/.bashrc
Add either or both of the lines below anywhere in the file.
You can of course choose any names you want. Both shutdown and reboot exist as systemd commands, but since they do not work in WSL machines, you can replace them with aliases as follows:
alias shutdown='wsl.exe --terminate $WSL_DISTRO_NAME'
alias reboot='cd /mnt/c/ && cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" && wsl.exe --terminate $WSL_DISTRO_NAME'
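If you prefer shell functions to aliases, an equivalent sketch for ~/.bashrc (same commands, just wrapped in a function):
wsl_reboot() {
    # start a detached cmd window that relaunches this distro after a short delay,
    # then terminate the current session
    cd /mnt/c/ &&
        cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" &&
        wsl.exe --terminate "$WSL_DISTRO_NAME"
}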
Expanding on the answer from @BasementScience:
To manage a remote WSL machine, I've set up a Windows Task Scheduler job that starts WSL with an /etc/init-wsl script, which in turn starts cron, ssh, rsyslog and autossh (so I can connect back to the WSL).
So naturally I'm keen to get these processes started on a remote WSL reboot as well, so that I'm able to log in again afterwards.
# Added to $HOME/.bashrc - Renamed aliases to separate from OS commands
alias wslshutdown='wsl.exe --terminate $WSL_DISTRO_NAME'
alias wslreboot='cd /mnt/c/ && cmd.exe /c start "Rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" -- sudo /etc/init-wsl && wsl.exe --terminate $WSL_DISTRO_NAME'
The key detail is here: ... && wsl -d $WSL_DISTRO_NAME" -- sudo /etc/init-wsl && ...
This will not start a new shell, but it will start my processes so that I can log in again.
The /etc/init-wsl script has to be created first:
sudo touch /etc/init-wsl && sudo chmod 755 /etc/init-wsl
# Add services as needed
sudo bash -c 'cat << EOF > /etc/init-wsl
#!/bin/sh
service ssh start
service cron start
EOF'
# Ensure your user (the %sudo group) can sudo the init script without password
sudo bash -c 'cat << EOF > /etc/sudoers.d/user-service
%sudo ALL=NOPASSWD: /usr/sbin/service *
%sudo ALL=NOPASSWD: /etc/init-wsl
EOF'
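To verify the rule before relying on it from the alias, sudo's -n flag makes sudo fail instead of prompting; note that on success this does actually run the init script:
sudo -n /etc/init-wsl && echo "passwordless init-wsl works"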

Why is my command not working with nohup and SSH?

I'm restarting the server from my local machine using the following command:
ssh -l root -p 22 $SERVER_HOST "cd $SERVER_DIR && nohup bin/restart &"
It's not working, and it prints nothing, so I don't know what the problem is. If I remove nohup and &, it works. Why, and how do I make it work (and continue in the background after the ssh session terminates)?
The version without nohup works, but it blocks the shell (it also prints the output from the bin/restart script, unlike the version with nohup). I can't use it, though, as I need the server to continue working in the background.
ssh -l root -p $SERVER_PORT $SERVER_HOST "cd $SERVER_DIR && bin/restart"
In case it matters, this is the content of the bin/restart script (it restarts a Ruby on Rails app):
. /root/.asdf/asdf.sh
killall -r ruby
RAILS_ENV=production bundle exec rails server
What worked for me is:
ssh -tt -l root -p 22 $SERVER_HOST 'cd $SERVER_DIR && nohup bin/restart & sleep 1'
Tested with OpenSSH 7.2p2 (client & server), bash 4.3, Linux 4.15.
-tt forces ssh to allocate a terminal, which is apparently important for nohup to work.
sleep 1 is not special; you just need something that forces the shell to context-switch. It could also be /bin/true, but that isn't as sure-fire.
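An alternative that avoids allocating a terminal is to detach all three of nohup's streams, so the remote sshd has no open pipe left to wait on; a sketch using the same variables as above:
ssh -l root -p 22 $SERVER_HOST \
  "cd $SERVER_DIR && nohup bin/restart > /dev/null 2>&1 < /dev/null &"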

Raspbian - Transmission torrents don't start after rebooting

I wrote a bash file and put it in cron to start my torrents (via the web interface) automatically after rebooting the system, but nothing happens.
The crontab -e entry:
@reboot bash /home/pi/torrent.sh >> /home/pi/torrent.log 2>&1
torrent.sh
echo
date
sudo service transmission-daemon start
sleep 10
sudo transmission-remote -t all -s
sleep 1
torrent.sh has all the necessary permissions.
P.S.: If I run the script from a terminal, my torrents start normally.
Hope you can help!

bash script execute commands after ssh

I am trying to execute a few commands via my first script but it's not working.
#!/bin/bash
#connect to server
echo "Connecting to the server..."
ssh -t root@IP '
#switch user to deploy
su - deploy
#switch path
echo "Switching the path"
cd /var/www/deploys/bin/app/config
#run deploy script
echo "Running deploy script"
/usr/local/bin/cap -S env=prod deploy
#restart apache
sudo /bin/systemctl restart httpd.service
bash -l
'
What is happening? I successfully connect to the server, the user is changed, and then I don't see anything happening. When I press Ctrl+C in the terminal, some output from the commands that should have been executed appears, but with some errors.
Why don't I see everything that is happening in the terminal after launching the script? Am I doing it the wrong way?
BTW: when I connect manually and run the commands myself, everything works nicely.
Using CentOS 7.
A clean way to log in through ssh and execute a set of commands is
ssh user@ip << EOF
#some commands
EOF
Here EOF acts as the delimiter for the command list.
Your script can be modified as follows:
ssh -t root@IP << EOF
#switch user to deploy
su - deploy
#switch path
echo "Switching the path"
cd /var/www/deploys/bin/app/config
#run deploy script
echo "Running deploy script"
/usr/local/bin/cap -S env=prod deploy
#restart apache
sudo /bin/systemctl restart httpd.service
bash -l
EOF
This will execute the commands and close the connection afterwards.
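One caveat: with an unquoted EOF, the local shell expands variables and command substitutions inside the block before anything reaches the server. Quoting the delimiter sends the block verbatim; a small sketch:
ssh -t root@IP <<'EOF'
# because of the quoted 'EOF', $HOSTNAME is expanded on the server, not locally
echo "$HOSTNAME"
EOF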
