For some instance types in AWS, an instance store volume (instance store 0) is available, which you can select in the launch-instance wizard under 'Add Storage'. I found that this instance store needs to be reformatted and remounted every time the instance starts. Once mounted, the mount survives a reboot but not an instance stop/start. I execute the following script via /etc/rc.local, which basically serves the purpose.
#!/usr/bin/bash
FSTYPE=xfs
DEVICE=/dev/xvdb
# Unmount anything left on /mnt, recreate the filesystem on the ephemeral
# device, then mount it world-writable with the sticky bit (like /tmp).
umount -f /mnt > /dev/null 2>&1
mkfs -t $FSTYPE $DEVICE
mount $DEVICE /mnt
chmod 777 /mnt
chmod +t /mnt
exit 0
But rc.local gets executed at reboot as well as at instance start. Is there a way in CentOS 7 to have this script run only during instance start and not during reboots?
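One possible approach, not from the original post: instead of distinguishing a reboot from a start, check whether the ephemeral device already carries a filesystem and only format it when it does not. Instance store contents survive a reboot but not a stop/start, so this naturally does the expensive work only on instance start. A minimal sketch, assuming the same device and mount point as above:
#!/usr/bin/bash
FSTYPE=xfs
DEVICE=/dev/xvdb
# blkid exits non-zero when the device has no recognisable filesystem,
# which is the case after a fresh instance start but not after a reboot.
if ! blkid "$DEVICE" > /dev/null 2>&1; then
    mkfs -t "$FSTYPE" "$DEVICE"
fi
mountpoint -q /mnt || mount "$DEVICE" /mnt
chmod 1777 /mnt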
I would like to be able to reboot WSL sessions. Doing so is a little awkward, as WSL does not use systemd, so we cannot use reboot. Within a WSL session, we can run any Windows executable:
boss#Asus: ~ $ wsl.exe -l -v
  NAME            STATE           VERSION
* Ubuntu-20.04    Running         2
  fedoraremix     Stopped         1
  Alpine          Stopped         1
  Ubuntu          Stopped         1
Therefore, we can use wsl.exe (you have to make sure to always add .exe when calling Windows commands or they will not work) to shut down the currently running WSL session with wsl.exe -t Ubuntu-20.04, but the problem is that I don't know the session name.
When we are inside a WSL session, hostname is something different, so I don't know how to find the name of the currently running session that I am inside (maybe a Windows process command that tells me what process I am running from?).
Ideally, I would like a command to equate to a reboot. I guess this would have to look something like:
Run an asynchronous command that will initiate a new session 5-10 seconds in the future to allow the previous session to fully shutdown (and that will not terminate when this session is terminated).
Terminate the currently running session with wsl.exe -t <my found name>.
A few seconds later, the new session will start up.
Credits to the commenters above.
To shut down a session from within a WSL guest, you can run:
wsl.exe --terminate $WSL_DISTRO_NAME
Rebooting is also possible; however, so far I do not know how to get a new terminal inside the same console window. The following will reboot the WSL guest and open a new console window for it when it has finished:
cd /mnt/c/ && cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" && wsl.exe --terminate $WSL_DISTRO_NAME
Explanation:
From the perspective of Windows, WSL systems are mounted as network resources. cmd does not support the resulting UNC path formats such as \\wsl$\Debian\<...>. Therefore it may be best to cd to a directory it can resolve as a Windows path instead, such as C:\, before it is executed. If omitted, cmd will complain and change its directory to %windir%.
&& runs another command after the previous one has finished, both in Linux shells and in Windows cmd.
cmd.exe /c starts a cmd instance and tells it to execute a command that follows.
start "<WindowTitle>" ... is a cmd-internal command to run another program inside its own window, independent of the cmd instance. In this case the program is another cmd window. It will not be affected when WSL shuts down.
In the original Linux terminal, the first cmd.exe /c command finishes, and the final command after && shuts down the guest as shown above.
The second cmd window waits for a few seconds, then starts a new WSL session of the same WSL machine.
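For readability, the same reboot sequence can be written step by step; this is just the one-liner from above split apart:
# 1. Give cmd.exe a working directory it can resolve as a Windows path.
cd /mnt/c/
# 2. Open a detached cmd window that waits 5 seconds, then relaunches this distro.
cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME"
# 3. Shut down the current guest.
wsl.exe --terminate $WSL_DISTRO_NAME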
Creating an Alias
You can make this easier to use by creating an alias. For bash users, edit your ~/.bashrc file and apply the changes afterwards:
nano ~/.bashrc && source ~/.bashrc
Add either or both of the lines below anywhere in the file.
You can of course choose any name you want. Both shutdown and reboot exist as systemd commands, but since they do not work in WSL machines, you can replace them with an alias as follows:
alias shutdown='wsl.exe --terminate $WSL_DISTRO_NAME'
alias reboot='cd /mnt/c/ && cmd.exe /c start "rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" && wsl.exe --terminate $WSL_DISTRO_NAME'
Expanding on the answer from #BasementScience
To manage a remote WSL machine, I've set up a Windows Task Scheduler job that starts wsl with an /etc/init-wsl script, which in turn starts cron, ssh, rsyslog and autossh (so I can connect back to WSL).
So naturally I'm keen to get these processes started on a remote WSL reboot as well, so I'm able to log in again afterwards.
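The Task Scheduler job itself isn't shown here; as a hedged sketch, such a job could be registered from an elevated Windows command prompt roughly like this (the distro name and the init script path are assumptions):
schtasks /Create /TN "WSL-init" /SC ONSTART /RU SYSTEM /TR "wsl.exe -d Ubuntu-20.04 -u root /etc/init-wsl"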
# Added to $HOME/.bashrc - Renamed aliases to separate from OS commands
alias wslshutdown='wsl.exe --terminate $WSL_DISTRO_NAME'
alias wslreboot='cd /mnt/c/ && cmd.exe /c start "Rebooting WSL" cmd /c "timeout 5 && wsl -d $WSL_DISTRO_NAME" -- sudo /etc/init-wsl && wsl.exe --terminate $WSL_DISTRO_NAME'
The key detail is here: ... && wsl -d $WSL_DISTRO_NAME" -- sudo /etc/init-wsl && ...
This will not start a new shell, but will start my processes so I can login again.
The /etc/init-wsl script has to be created:
sudo touch /etc/init-wsl && sudo chmod 755 /etc/init-wsl
# Add services as needed
sudo bash -c 'cat << EOF > /etc/init-wsl
service ssh start
service cron start
EOF'
# Ensure your user (the %sudo group) can sudo the init script without a password
sudo bash -c 'cat << EOF > /etc/sudoers.d/user-service
%sudo ALL=NOPASSWD: /usr/sbin/service *
%sudo ALL=NOPASSWD: /etc/init-wsl
EOF'
I'm running boot2docker 1.3 on Win7.
I want to connect a shared folder.
In the VirtualBox Manager, under the image properties -> shared folders, I've added the folder I want and named it "c/shared". The "auto-mount" and "make permanent" boxes are checked.
When boot2docker boots, it isn't mounted though. I have to do an additional:
sudo mount -t vboxsf c/shared /c/shared
for it to show up.
Since I need that for every time I'll ever use docker, I'd like that to just run on boot, or just already be there. So I thought if there were some startup script I could add, but I can't seem to find where that would be.
Thanks
EDIT: It's yelling at me about this being a duplicate of Boot2Docker on Mac - Accessing Local Files, which is a different question. I wanted to mount a folder that wasn't one of the defaults, such as /Users on OS X or /c/Users on Windows. And I'm specifically asking for startup scripts.
/var/lib/boot2docker/bootlocal.sh probably fits your need; it is run by the init script /opt/bootscripts.sh.
bootscripts.sh also writes its output to /var/log/bootlocal.log; see the segment below (boot2docker 1.3.1):
# Allow local HD customisation
if [ -e /var/lib/boot2docker/bootlocal.sh ]; then
/var/lib/boot2docker/bootlocal.sh > /var/log/bootlocal.log 2>&1 &
fi
One use case for me:
I usually set the shared directory to /c/Users/larry/shared, then I add this script:
#!/bin/bash
ln -s /c/Users/larry/shared /home/docker/shared
That way I can access ~/shared in boot2docker, the same as on the host.
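For the original question, a bootlocal.sh that performs the vboxsf mount at boot might look like this; a sketch assuming the share name and mount point from the question (uid 1000 / gid 50 are the boot2docker docker user and staff group):
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh
mkdir -p /c/shared
mount -t vboxsf -o uid=1000,gid=50 c/shared /c/shared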
see FAQ.md (provided by #KCD)
If using boot2docker (Windows), you should do the following:
First, create a shared folder for the boot2docker VM:
"C:/Program Files/Oracle/VirtualBox/VBoxManage" sharedfolder add default -name some_shared_folder -hostpath /c/some/path/on/your/windows/box
Then make this folder auto-mount:
docker-machine ssh
vi /var/lib/boot2docker/profile
Add the following at the end of the profile file:
sudo mkdir /windows_share
sudo mount -t vboxsf some_shared_folder /windows_share
Restart docker-machine
docker-machine restart
Verify that the folder content is visible in boot2docker:
docker-machine ssh
ls -al /windows_share
Now you can mount the folder using either docker run or docker-compose, e.g.:
docker run -it --rm --volume /windows_share:/windows_share ubuntu /bin/bash
ls -al /windows_share
If the changes in the profile file are lost after a VM or Windows restart, do the following:
1) Edit the file C:\Program Files\Docker Toolbox\start.sh and comment out the following line:
#line number 44 (or somewhere around that)
yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
#change the line above to:
# yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
Thanks for your help with this. A few additional flags I needed to add in order for the new mount to be accessible by the boot2docker "docker" user:
sudo mount -t vboxsf -o umask=0022,gid=50,uid=1000 Ext-HD /Volumes/Ext-HD
With docker 1.3 you do not need to manually mount anymore. Volumes should work properly as long as the source on the host VM is in your user directory.
https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
I can't make it work following Larry Cai's instructions. I figured I could make changes to "c:\Program Files\Boot2Docker for Windows\start.sh" and add, below the line
eval "$(./boot2docker.exe shellinit 2>/dev/null | sed 's,\\,\\\\,g')"
your mount command:
eval "$(./boot2docker ssh 'sudo mount -t vboxsf c/shared /c/shared')"
I also add the command to start my container here.
eval "$(docker start KDP)"
I am looking to have the Transmission bittorrent client execute a script that changes the owner and permissions of all torrents in the completed folder when a torrent completes downloading.
I am using the following relevant settings in /etc/transmission-daemon/settings.json:
"download-dir": "/data/transmission/completed",
"script-torrent-done-enabled": true,
"script-torrent-done-filename": "/home/user/script.sh",
The script does not seem to execute after a torrent completes. I know there could be other issues going on aside from the content of the script itself. The owner of the script file is debian-transmission and I have the permissions set to 777, so there shouldn't be an issue with Transmission accessing the script unless I have missed something here.
The /home/user/script.sh file is as follows:
#!/bin/bash
echo sudopassword | /usr/bin/sudo -S /bin/chmod -f -R 777 /data/transmission/completed
echo sudopassword | /usr/bin/sudo -S /bin/chown -f -R user /data/transmission/completed
I know it is poor form to use a sudo command in this fashion, but I can execute the script on its own and it works correctly. I am not sure why Transmission is not executing the script. Transmission supports some environment variables such as TR_TORRENT_NAME that I would like to use once the script is triggered. Is there anything I am not setting up in the file that would prevent the script from working correctly, and how would I use the environment variables?
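Regarding the environment variables: Transmission exports, among others, TR_TORRENT_DIR and TR_TORRENT_NAME to the done-script, so a hedged sketch that only touches the torrent that just finished could look like this (it assumes the user running the daemon is allowed to chmod/chown those files):
#!/bin/bash
# TR_TORRENT_DIR  - directory the torrent was downloaded to
# TR_TORRENT_NAME - name of the completed torrent
chmod -R 777 "$TR_TORRENT_DIR/$TR_TORRENT_NAME"
chown -R user "$TR_TORRENT_DIR/$TR_TORRENT_NAME"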
I'll probably answer a different question here, but if you're trying this simply to gain write permissions on your Transmission Daemon's downloads to your user, try a different approach.
I'm running my Transmission daemon under my username, as set in its systemd service file (/etc/systemd/system/multi-user.target.wants/transmission-daemon.service in my case):
[Unit]
Description=Transmission BitTorrent Daemon
After=network.target
[Service]
# Set the user and group the daemon runs as
User=myuser
Group=mygroup
# UMask=0022 gives 644 permissions on new files (u+w),
# 0002 gives 664 (g+w), 0000 gives 666 (a+w)
UMask=0022
Type=notify
ExecStart=/usr/bin/transmission-daemon -f --log-error
ExecStop=/bin/kill -s STOP $MAINPID
ExecReload=/bin/kill -s HUP $MAINPID
[Install]
WantedBy=multi-user.target
Notice User, Group and UMask (with capital M) directives.
See the systemd.exec manpage ('Execution environment configuration').
Then run:
sudo chown -fR user /data/transmission/completed
sudo systemctl daemon-reload
sudo service transmission-daemon restart
and you should be set :)
Add the user who will execute the script to a group with default sudo access.
Fedora - add user to the wheel group
sudo usermod -aG wheel $(whoami)
Ubuntu - add the user to the sudo group (or admin, which is deprecated)
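On Ubuntu, the equivalent would be (assuming the default sudo group):
sudo usermod -aG sudo $(whoami)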
I need to run the following set of commands in a shell script
modprobe nbd
sudo qemu-nbd -c /dev/nbd0 path/to/image/file
sudo mount /dev/nbd0p1 /mnt/temp
python copyFiles.py
sudo umount /mnt/temp
sudo qemu-nbd -d /dev/nbd0
sudo rmmod nbd
When I run these commands individually it works fine, but when I put them in a shell script and execute it, I always end up with an error from the mount command.
So I threw in a sleep 1 before mount and it works as expected.
What could be the reason behind this?
(Some sort of asynchronous call registration delay/ race condition?)
mount error: mount point /mnt/temp does not exist
So it seems the directory /mnt/temp doesn't exist when you are running it as a shell script. Just create it or add this in your script somewhere before the mount command:
mkdir -p /mnt/temp > /dev/null 2>&1
Both mount and the previous command require escalated privileges. Does it error because the lock from the previous command is still in place when mount tries to run?
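If the problem really is a timing issue between qemu-nbd attaching the image and the partition node appearing (which the sleep 1 workaround suggests), a sketch that waits for the device node instead of sleeping a fixed amount might look like this; it uses the same commands as the question, with an added polling loop:
#!/bin/bash
sudo modprobe nbd
sudo qemu-nbd -c /dev/nbd0 path/to/image/file
# qemu-nbd can return before the kernel has scanned the partition table,
# so wait (up to ~5 seconds) for the partition device node to show up.
for i in $(seq 1 50); do
    [ -b /dev/nbd0p1 ] && break
    sleep 0.1
done
sudo mkdir -p /mnt/temp
sudo mount /dev/nbd0p1 /mnt/temp
python copyFiles.py
sudo umount /mnt/temp
sudo qemu-nbd -d /dev/nbd0
sudo rmmod nbd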
I have made a short shell script which launches a VM, sleeps some time to allow the VM to boot and then mounts a share at the VM on the host computer:
#!/bin/bash
nohup VBoxManage startvm "Ubuntu server" --type headless &&
sleep 60 &&
sudo mount -t cifs //192.168.1.1/www /media/ubuntuserver/
The VM is started properly and the script sleeps but no mount occurs and the script seems to just exit instead. What am I doing wrong?
Is your sudo mount working in non-interactive mode? Make sure this command is not asking for a password.
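A hedged example of how that could be arranged: a sudoers drop-in (edit with visudo; the username and file name are assumptions) that allows this one mount command without a password:
# /etc/sudoers.d/mount-www (hypothetical file name)
youruser ALL=(root) NOPASSWD: /bin/mount -t cifs //192.168.1.1/www /media/ubuntuserver/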
Add some logging so that you know what output is being returned
#!/bin/bash
nohup VBoxManage startvm "Ubuntu server" --type headless >> ~/script_log.txt 2>&1 &&
sleep 60 >> ~/script_log.txt 2>&1 &&
sudo mount -t cifs //192.168.1.1/www /media/ubuntuserver/ >> ~/script_log.txt 2>&1
Replace ~/script_log.txt with any suitable log file path.