Shutdown or Reboot a WSL session from inside the WSL session - bash

I would like to be able to reboot WSL sessions. Doing so is a little awkward because WSL does not use systemd, so we cannot use reboot. Within a WSL session, we can run any Windows executable:
boss@Asus:~$ wsl.exe -l -v
  NAME            STATE           VERSION
* Ubuntu-20.04    Running         2
  fedoraremix     Stopped         1
  Alpine          Stopped         1
  Ubuntu          Stopped         1
Therefore, we can use wsl.exe (make sure to always add .exe when calling Windows commands or they will not work) to shut down the currently running WSL session with wsl.exe -t Ubuntu-20.04, but the problem is that I don't know the session name.
Inside a WSL session, hostname returns something different, so I don't know how to find the name of the session I am currently in (maybe there is a Windows command that tells me which session I am running from?).
Ideally, I would like a single command that amounts to a reboot. I guess it would have to look something like this:
1. Run an asynchronous command that will initiate a new session 5-10 seconds in the future, allowing the previous session to fully shut down (and that will not terminate when this session is terminated).
2. Terminate the currently running session with wsl.exe -t <my found name>.
3. A few seconds later, the new session starts up.

Credit to the commenters above.
To shutdown a session from within a WSL guest, you can run:
wsl.exe --terminate $WSL_DISTRO_NAME
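WSL exports the name of the running distribution in the WSL_DISTRO_NAME environment variable, so you can check it before terminating:
echo $WSL_DISTRO_NAME    # prints e.g. Ubuntu-20.04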
Rebooting is also possible; however, so far I do not know how to get the new session into the same console window. The following will reboot the WSL guest and open a new console window for it when it has finished:
cd /mnt/c/ && cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" && wsl.exe --terminate $WSL_DISTRO_NAME
Explanation:
From the perspective of Windows, WSL systems are mounted as network resources, and cmd does not support the resulting UNC path formats such as \\wsl$\Debian\<...>. Therefore it is best to cd to a directory cmd can resolve as a Windows path, such as C:\ (/mnt/c/ on the Linux side), before it is executed. If omitted, cmd will complain and change its directory to %windir%.
&& runs the next command after the previous one has finished, both in Linux shells and in Windows cmd.
cmd.exe /c starts a cmd instance and tells it to execute a command that follows.
start "<WindowTitle>" ... is a cmd-internal command to run another program inside its own window, independent of the cmd instance. In this case the program is another cmd window. It will not be affected when WSL shuts down.
In the original Linux terminal, the first cmd.exe /c command finishes immediately, and the third command after && shuts down the guest as shown above.
The second cmd window waits for a few seconds, then starts a new WSL session of the same WSL machine.
Creating an Alias
You can make this easier to use by creating an alias. For bash users, edit your ~/.bashrc file and apply the changes afterwards:
nano ~/.bashrc && source ~/.bashrc
Add either or both of the lines below anywhere in the file.
You can of course choose any names you want. Both shutdown and reboot exist as systemd commands, but since they do not work in WSL machines, you can replace them with aliases as follows:
alias shutdown='wsl.exe --terminate $WSL_DISTRO_NAME'
alias reboot='cd /mnt/c/ && cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" && wsl.exe --terminate $WSL_DISTRO_NAME'
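As a variant (a sketch only, not part of the original answer), you can define the reboot as a bash function instead, running the cd inside a subshell so that your interactive shell does not end up in /mnt/c/:
wslreboot() {
    # The subshell confines the cd; the new console window is spawned from there
    (cd /mnt/c/ && cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME")
    wsl.exe --terminate "$WSL_DISTRO_NAME"
}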

Expanding on the answer from @BasementScience
To manage a remote WSL machine I've set up a Windows Task Scheduler job that starts WSL with an /etc/init-wsl script, which in turn starts cron, ssh, rsyslog and autossh (so I can connect back to the WSL machine).
So naturally I'm keen to have these processes started on a remote WSL reboot as well, so that I'm able to log in again afterwards.
# Added to $HOME/.bashrc - Renamed aliases to separate from OS commands
alias wslshutdown='wsl.exe --terminate $WSL_DISTRO_NAME'
alias wslreboot='cd /mnt/c/ && cmd.exe /c start "Rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME -- sudo /etc/init-wsl" && wsl.exe --terminate $WSL_DISTRO_NAME'
The detail is here: ... && wsl -d $WSL_DISTRO_NAME -- sudo /etc/init-wsl ...
This will not start a new shell, but it will start my processes so that I can log in again.
The /etc/init-wsl script has to be created:
sudo touch /etc/init-wsl && sudo chmod 755 /etc/init-wsl
# Add services as needed
sudo bash -c 'cat << EOF > /etc/init-wsl
service ssh start
service cron start
EOF'
# Ensure your user (the %sudo group) can sudo the init script without password
sudo bash -c 'cat << EOF > /etc/sudoers.d/user-service
%sudo ALL=NOPASSWD: /usr/sbin/service *
%sudo ALL=NOPASSWD: /etc/init-wsl
EOF'
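For reference, the Task Scheduler action could then look roughly like this (the distro name here is an assumption; use your own):
wsl.exe -d Ubuntu-20.04 -u root /etc/init-wsl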

Related

Run a job on a Raspberry Pi

I have written some simple Python scripts and I would like to run them on my Raspberry Pi even when I am not logged in. So far, I can log in to the machine via ssh and run the script without a problem, but as soon as I disconnect the ssh session, the script breaks or stops. So I was wondering if there is a way to keep the script running after the ssh connection ends.
Here is the system I am using: Raspberry Pi 3B+ with Ubuntu 22.04 LTS, and here is how I run my script:
ssh xx@xxx.xxx.xxx.xxx
cd myapp/
python3 runapp.py
You can use nohup to stop hangup signals from affecting your process. Here are three different ways of doing it.
This is a "single shot" command, I mean you type it all in one go:
ssh SOMEHOST "cd SOMEWHERE && nohup python3 SOMESCRIPT &"
Or here, you log in, change directory and get a new prompt in the remote host, run some commands and then, at some point, exit the ssh session:
ssh SOMEHOST
cd SOMEWHERE
nohup python3 SOMESCRIPT &
exit
Or, this is another "single-shot" command, where you won't get another prompt until you type EOF:
ssh SOMEHOST <<EOF
cd SOMEWHERE
nohup python3 SOMESCRIPT &
EOF
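Note that when run interactively, nohup appends output to a file called nohup.out. In the single-shot ssh forms above, the output instead flows back through a pipe over the connection, which can keep ssh from closing, so it is safer to redirect explicitly (app.log is just an illustrative name):
ssh SOMEHOST "cd SOMEWHERE && nohup python3 SOMESCRIPT > app.log 2>&1 &"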
"if there is a way to keep the script running after the end of the ssh connection."
Just run it in the background.
python3 runapp.py &
You could store the logs to system log.
python3 runapp.py | logger &
You could learn about screen and tmux virtual terminals, so that you can view the logs later. You could start a tmux session, run the command inside and detach the session.
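For example, with tmux (the session name myapp is just an illustration):
# Start a detached session running the script; reattach later to view output
tmux new-session -d -s myapp 'cd ~/myapp && python3 runapp.py'
tmux attach -t myapp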
You could set up a systemd service file for the program and run it "as a service".
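A minimal unit file sketch, with hypothetical paths and user (adjust to your setup), saved as /etc/systemd/system/runapp.service:
[Unit]
Description=Run runapp.py
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
ExecStart=/usr/bin/python3 /home/ubuntu/myapp/runapp.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable and start it with sudo systemctl enable --now runapp.service.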
If atd is running, you can use at to schedule commands to be executed at a particular time.
Examples:
$ echo "python3 /path/to/runapp.py"|at 11:00
job 10 at Fri Jun 3 11:00:00 2022
$ echo "python3 /path/to/runapp.py" | at now
job 11 at Thu Jun 2 19:57:00 2022
# after minutes/hours/days/....
$ echo "python3 /path/to/runapp.py" | at now +5 minutes
$ echo "python3 /path/to/runapp.py" | at now +2 hours
$ ssh user@host "echo 'python3 /path/to/runapp.py' | at now"
Jobs created with at are executed only once.
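To inspect or cancel pending jobs, atq lists them and atrm removes one by its job number:
atq       # list pending at jobs with their numbers
atrm 10   # cancel job number 10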

A Bash script which runs commands in two terminal / ssh sessions

I'm trying to automate setting up and configuring a vagrant process with a bash script.
The thing is, I need to ssh into my vagrant machine twice, and I want both terminals to be visible on my screen whilst doing this.
The process is like so...
In terminal 1:
vagrant up
vagrant ssh myhost
wait
cd /my/directory/
... do some commands...
Then I want this terminal to persist / stay open, and a new tab to open in which another vagrant session starts:
wait
cd /my/other/directory
.... do some commands...
I've got the script working for the first vagrant/terminal session and stored in my /bin/ directory, but how do I add the second?
How it looks exactly depends on the terminal emulator, but the basic pattern could be as follows:
First script (script1.sh)
vagrant up
vagrant ssh myhost
wait
cd /my/directory/
xterm -e script2.sh &
... do some commands...
Second script (script2.sh)
wait
cd /my/other/directory
.... do some commands...
The trick is to open another terminal window from the first script (for xterm it is xterm -e).
In case you are interested in a way that works independently of the terminal emulator, consider using tmux (a terminal multiplexer).
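A sketch of that approach (assuming both scripts are on your PATH; here both panes are launched from outside, so the xterm line in script1.sh would be dropped):
# Run script1.sh in a detached tmux session, add a pane for script2.sh, attach
tmux new-session -d -s vagrant 'script1.sh'
tmux split-window -t vagrant 'script2.sh'
tmux attach -t vagrant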
One other general hint: it is generally not recommended to store locally-created scripts under /bin. More common places are /usr/local/bin or $HOME/bin (although $HOME/bin might need to be added to your PATH separately).

Run shutdown command inside bash script

I'm trying to make an executable file (bash script) to show me a notification and shut down my computer when a process is not found.
I will run the script as a Startup Application and I'm using the notify-send and shutdown commands in this script.
The problem is:
(1) If I add myfolder/myscript to the Startup Applications list, it can't run the shutdown command (the root password is required for this).
(2) If I add the script as sudo myfolder/myscript, it can't show the notifications via notify-send.
I've already done a lot of searching around the internet and tried these steps:
(1) Added the script path or /sbin/shutdown to the sudoers via sudo visudo.
(2) Added su - $USER -c "DISPLAY=$DISPLAY DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$UID/bus ..." before the notify-send command (I found some users reporting that root can't send notifications).
So... none of them worked. What am I missing?
What can be done to display notifications AND shutdown?
Here is my code:
#!/bin/bash
#Search for a specific process and sleep if it is found (removed for space saving)
shut_time=$(date --date='10 minutes' +"%T")
notify-send -t 600000 "WARNING:
Program is not running.
Shutting down in 10 minutes (scheduled for $shut_time)."
#ALREADY TESTED THE LINES BELOW (THEY DON'T WORK)
#su - $USER -c "DISPLAY=$DISPLAY DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$UID/bus notify-send -t 600000 'WARNING:
#Program is not running.
#Shutting down in 10 minutes.'"
sudo /sbin/shutdown -h +10 #Tried with or without sudo
I'm running MX Linux 18 (xfce, Debian based).
To execute a command, or even another bash script, from within a bash script, you can simply prefix it with a dollar sign and enclose the whole command and its arguments in parentheses, as follows:
$(COMMANDS)
In this case, it would be:
$(sudo shutdown 10)
The statement above executes the shutdown command, scheduling a system shutdown in 10 minutes, and prints the actual date and time at which the system will shut down, just as if you had run the command in a console. There is no need to turn the user into a sudoer or superuser: when the script is run as root or with sudo, the user will be prompted for the password and the command will be executed.
Plus, if there is ever a need to capture the output of any command or script, do as follows:
my_shuttime=$(sudo shutdown 10)
I think it lacks an entry for shutdown in the sudoers configuration. Create a file under /etc/sudoers.d and make the following entry:
[YOURUSER] ALL = (ALL) NOPASSWD: /sbin/shutdown
Replace [YOURUSER] with your user account!
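It is safest to create such files with visudo, which syntax-checks them before saving (the file name shutdown is just an example):
sudo visudo -f /etc/sudoers.d/shutdown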

Raspbian - Transmission torrents don't start after rebooting

I wrote a bash script and put it in cron to start my torrents (on the web interface) automatically after rebooting the system, but nothing happens.
The crontab -e entry:
@reboot bash /home/pi/torrent.sh >> /home/pi/torrent.log 2>&1
torrent.sh
echo
date
sudo service transmission-daemon start
sleep 10
sudo transmission-remote -t all -s
sleep 1
torrent.sh has all permissions set.
P.S.: If I run the script from the terminal, my torrents start normally.
Hope you can help!

Run an ssh script into an Ubuntu instance, do something, and stay in Ubuntu on exit

I am running a very simple script that will ssh into a remote Ubuntu instance, move around the directory structure, and execute a few things; then I want the prompt to stay on the remote machine. When the script ends, it ends up back at the local prompt. How do I modify the script so that it finishes with the remote prompt?
local$ ssh -i xxx.pem ubuntu@xxx.ap-region.compute.amazonaws.com \
"cd virtualenv; ls -lh;"
Two things need to be added to your command line:
The bash command at the end starts the bash shell (you can start any other shell you want).
The -t switch makes sure the remote server allocates a TTY, so your interactive shell works as expected:
local$ ssh -t -i xxx.pem ubuntu@xxx.ap-region.compute.amazonaws.com \
"cd virtualenv; ls -lh; bash"
