Implementing shutdown command from scratch - bash

root ~$ shutdown
-sh: shutdown: not found
root ~$ shutdown -h now
-sh: shutdown: not found
None of the commands are working. I think I need to implement the command from scratch. Can anybody guide me?
uname -a gives:
2.6.35.3 #49 PREEMPT Wed Jun 11 20:03:43 IST 2014 armv5tejl GNU/Linux

Try calling the command with its complete path: /sbin/shutdown
Moreover, on most systems it is a root-only command, so you should run it as the root user (I see a $ at the end of your command prompt, so I am guessing you are not root).
Other commands you can try are presented here, or you can always use the init 0 command as the root user.
Writing the shutdown code yourself is a costly last resort, only worth taking if all the other alternatives fail.
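For illustration, a minimal sketch of that fallback order (assuming a BusyBox-style embedded system, which your armv5 uname suggests; adjust the paths to what your box actually ships):
#!/bin/sh
# try the usual binary first, then common fallbacks
if [ -x /sbin/shutdown ]; then
    /sbin/shutdown -h now
elif command -v poweroff >/dev/null 2>&1; then
    poweroff
else
    init 0
fi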

Related

systemd does not start service - Failed at step USER spawning

I am trying to write a systemd service script. It starts as the root user, which creates a nologin user and gives him privileges. Then the nologin user starts the application.
I am on rhel-7.5 (Maipo) with Linux-5.0.7-2019.05.28.x86_64. Here is what I tried.
/root/myhome/my_setup.sh:
#!/bin/bash
# Create nologin user with working dir. Make him owner of the DB files and the binary he runs.
crdb_setup() {
    /bin/mkdir -p /var/lib/lsraj /root/crdb || return $?
    /usr/bin/getent group lsraj || /usr/sbin/groupadd -g 990 lsraj || return $?
    /usr/bin/getent passwd lsraj || /usr/sbin/useradd -u 990 -g 990 \
        -c 'CRDB User' -d /var/lib/lsraj -s /sbin/nologin -M -K UMASK=022 lsraj || return $?
    /bin/chown lsraj:lsraj /var/lib/lsraj /root/crdb /root/myhome/cockroach || return $?
}
crdb_setup
[root@lsraj ~]#
total 99896
-rwxr-xr-x 1 root root 102285942 Jun 18 16:54 cockroach
-rwxr-xr-x 1 root root 521 Jun 18 17:07 my_setup.sh
[root@lsraj ~]#
Service script:
[root@lsraj ~]# cat /usr/lib/systemd/system/lsraj.service
[Unit]
Description=Cockroach Database Service
After=network.target syslog.target
[Service]
Type=notify
# run the script with root privileges. The script creates the user and gives him privileges.
ExecStartPre=+/root/myhome/my_setup.sh
User=lsraj
Group=lsraj
WorkingDirectory=/var/lib/lsraj
ExecStart=/root/myhome/cockroach start --insecure --host=localhost --store=/root/crdb
ExecStop=/root/myhome/cockroach quit --insecure --host=localhost
StandardOutput=journal
Restart=on-failure
RestartSec=60s
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cockroachdb
[Install]
WantedBy=multi-user.target
[root@lsraj ~]#
Jun 18 17:30:51 lsraj systemd: [/usr/lib/systemd/system/lsraj.service:8] Executable path is not absolute, ignoring: +/root/myhome/my_setup.sh
Jun 18 17:30:51 lsraj systemd: Starting Cockroach Database Service...
Jun 18 17:30:51 lsraj systemd: Failed at step USER spawning /root/myhome/cockroach: No such process
Jun 18 17:30:51 lsraj systemd: lsraj.service: main process exited, code=exited, status=217/USER
Jun 18 17:30:51 lsraj systemd: Failed at step USER spawning /root/myhome/cockroach: No such process
Jun 18 17:30:51 lsraj systemd: lsraj.service: control process exited, code=exited status=217
Jun 18 17:30:51 lsraj systemd: Failed to start Cockroach Database Service.
Jun 18 17:30:51 lsraj systemd: Unit lsraj.service entered failed state.
Jun 18 17:30:51 lsraj systemd: lsraj.service failed.
I've moved my comment here to support richer formatting.
I cannot advise on your need for the '+'; I am only reading the error message for you, which says systemd is ignoring the ExecStartPre path because it is not absolute.
Maybe this is a feature that exists upstream at freedesktop.org, but my Red Hat 7.6 release (close to the 7.5 you indicate you are using) does not include a similar statement (or table) in the systemd.service unit-file man page. Plus, you are getting a very clear error message about that line in your unit file.
The man page mentions "-" and "@", but none of the others...
Here is an extract from the man page (and I've provided a link above to the full page).
ExecStartPre=, ExecStartPost=
Additional commands that are executed before or after the command in ExecStart=, respectively. Syntax is the same as for ExecStart=, except that multiple command lines are allowed and the commands are executed one after the other, serially.
If any of those commands (not prefixed with "-") fail, the rest are not executed and the unit is considered failed.
Note that ExecStartPre= may not be used to start long-running processes. All processes forked off by processes invoked via ExecStartPre= will be killed before the next service process is run.
I suggest trying to remove the "+" first and see what happens, then progress from there.
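If removing the "+" restores the unit but you still need the pre-step to run as root, note that (if I recall correctly) RHEL 7 ships systemd 219, which predates the "+" prefix; that version offers PermissionsStartOnly= for the same purpose. A sketch of how the [Service] section might look with it (an assumption on my part, not something you have tested):
[Service]
Type=notify
# With PermissionsStartOnly=true, User=/Group= apply only to ExecStart=;
# ExecStartPre= still runs as root, so no "+" prefix is needed.
PermissionsStartOnly=true
ExecStartPre=/root/myhome/my_setup.sh
User=lsraj
Group=lsraj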

Tomatousb, ls command is broken and returns just the query

I ran ipkg upgrade on my old TomatoUSB box, which seems to have resulted in at least a broken ls command:
[root@tomatousb root]$ /bin/ls /
/
[root@tomatousb root]$ ls /bin
/bin
however, the results are displayed in different colours.
There is also strange behaviour:
[root@tomatousb root]$ echo $PATH
echo $PATH
sh: echo: Permission denied
[root@tomatousb root]$ /bin/echo $PATH
/bin/echo $PATH
/opt/bin:/opt/sbin:/bin:/sbin:/usr/bin:/usr/sbin
I have no clue what's wrong with it.
The logs I see are as follows:
/var/log/messages
Jan 1 04:00:11 tomatousb user.info kernel: ipt_recent v0.3.1: Stephen Frost <sfrost@snowman.net>. http://snowman.net/projects/ipt_recent/
Jan 31 23:10:21 tomatousb user.notice root: <<<< MPCSD: Config-files not found in /jffs/config/mpcs & /opt/etc/mpcs!!! Exit. >>>>
Jan 31 23:11:02 tomatousb cron.err crond[143]: time disparity of 25290430 minutes detected
Jan 31 23:37:26 tomatousb authpriv.info dropbear[505]: Child connection from *.*.*.*:*
So, basically, when I SSH in, I get dropbear.
It seems that during the last ipkg upgrade I got a new bash, tcpdump, and two more items, but I can't recall which exactly... And I can't find the ipkg logfile...
Finally I bumped into my own old discussion from when I had the same issue with the same box, and here's what the reason was:
[root@tomatousb mnt]$ cat /opt/etc/profile
#
# Bash initialization script
#
PS1="[\u@\h \W]$ "
PATH=/opt/sbin:/opt/bin:/sbin:/bin:/usr/sbin:/usr/bin
LD_LIBRARY_PATH=/opt/lib:${LD_LIBRARY_PATH}
export PS1 PATH LD_LIBRARY_PATH
[root@tomatousb mnt]$ rm /opt/etc/profile
Then I rebooted, and everything was restored to normal operation!
I don't know what exactly in that profile file ruined everything and caused the 'memory exhausted' error when running vi.
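If you ever need to pin down the offending line without deleting the whole file, one simple approach (my suggestion, not part of the original discussion) is to source the suspect profile in a throwaway shell and bisect it:
# run a disposable shell that sources only the suspect file
/bin/sh -c '. /opt/etc/profile; echo $PATH; /bin/ls /'
# then comment out lines in /opt/etc/profile one at a time and re-run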

Add alias to Docker during build

Since I am trying to compile a program during the build phase of a container, I'm adding my aliases to the .bashrc during the build:
RUN cat /path/to/aliases.sh >> ~/.bashrc
When I start the container, all aliases are available. This is already good, but it is not the behavior that I want.
I've already googled around and found out that the .bashrc file is only loaded by an interactive shell, which is not the case during the build phase of the container.
I'm trying to force the load of my aliases using:
RUN shopt -s expand_aliases
or
RUN shopt -s expand_aliases && alias
or
RUN /bin/bash -c "both commands listed above..."
Which, surprisingly, does not yield the expected outcome. [/irony off]
Now my question: How can I set aliases for the build phase of the container?
Regards
When Docker executes each RUN, it invokes the configured SHELL, passing the rest of the line as an argument. The default is /bin/sh -c. This is documented here.
The problem is that you need to set the aliases for each layer's execution, because each RUN launches a new shell. I didn't find a non-interactive way to get bash to read the .bashrc file each time.
So, just for fun I did this, and it's working:
aliasshell.sh
#!/bin/bash
# each "alias" is defined as a shell function:
my_ls(){
    ls $@
}
# run the command line Docker passes in (unquoted $@ so it word-splits):
$@
Dockerfile
FROM ubuntu
COPY aliasshell.sh /bin/aliasshell.sh
SHELL ["/bin/aliasshell.sh"]
RUN ls -l /etc/issue
RUN my_ls -l /etc/issue
Output:
docker build .
Sending build context to Docker daemon 4.096 kB
Step 1/5 : FROM ubuntu
---> f7b3f317ec73
Step 2/5 : COPY aliasshell.sh /bin/aliasshell.sh
---> Using cache
---> ccdfc54dd0ce
Step 3/5 : SHELL /bin/aliasshell.sh
---> Using cache
---> bb17a8bf1c3c
Step 4/5 : RUN ls -l /etc/issue
---> Running in 15ae8f0bb93b
-rw-r--r-- 1 root root 26 Feb 7 23:55 /etc/issue
---> 0337da801651
Removing intermediate container 15ae8f0bb93b
Step 5/5 : RUN my_ls -l /etc/issue <-------
---> Running in 5f58e0aa4e95
-rw-r--r-- 1 root root 26 Feb 7 23:55 /etc/issue
---> b5060d9c5e48
Removing intermediate container 5f58e0aa4e95
Successfully built b5060d9c5e48
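An alternative sketch that avoids a custom SHELL script (an untested assumption on my part, relying on bash's documented BASH_ENV handling for non-interactive shells; aliases.sh is the file from the question and my_ls the example alias from above):
FROM ubuntu
COPY aliases.sh /etc/aliases.sh
# make RUN use bash, and have bash source the aliases file on startup;
# aliases.sh must itself contain: shopt -s expand_aliases
SHELL ["/bin/bash", "-c"]
ENV BASH_ENV=/etc/aliases.sh
RUN my_ls -l /etc/issue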

Virt-install can't find image behind symbolic link

I ran into some strange behavior while setting up a script to start kvm instances today, and am hoping you all can weigh in on what's going on here.
Setup:
I have a script that starts a kvm with virt-install.
virt-install ... --disk=image.qcow2 ...
I want to run the same script on different versions of image.qcow2, so I created a symbolic link pointing to my latest image.
My directory structure would look something like this:
startKvm.sh
image.qcow2 -> image_v2.0.qcow2
image_v2.0.qcow2
image_v1.0.qcow2
However, when I tried to run my virt-install command, it returned the following error.
ERROR internal error: process exited while connecting to monitor: datetime qemu-kvm: -drive file=/path/image.qcow2,if=none,id=drive-ide0-0-0,format=qcow2: could not open disk image /path/image.qcow2: Could not open file: Permission denied
Thoughts on the cause and alternate solutions?
I've had a similar idea to use symlinks; however, in my scenario I run virt-manager via SSH with X forwarding, as a user in the wheel (admin) group. Creating a virtual guest produced the same error:
Unable to complete install: 'internal error: process exited while connecting to monitor: 2018-02-24T07:53:19.064452Z qemu-kvm: -drive file=/var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso,format=raw,if=none,id=drive-ide0-0-1,readonly=on: could not open disk image /var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso: Could not open '/var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso': Permission denied'
The issue is, in fact, with permissions. When you set up an .iso image using qemu-kvm, its ownership is changed to user/group qemu upon deployment. We need to make sure user qemu has access rights along the whole path to the file.
Test case
User running virt-manager: yahol (wheel/admin group)
ISO or qcow2 image location: /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
Note: please ignore the timestamps as I've hacked together this answer over 2 days. It was late and I was too tired to finish it the previous day.
Starting with image in user's home directory:
$ ll /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
-rw-r--r--. 1 yahol yahol 565182464 Feb 24 08:49 archlinux-2018.02.01-x86_64.iso
Creating symlink as root in default path to KVM images:
# cd /var/lib/libvirt/images
# ln -s /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
lrwxrwxrwx. 1 root root 48 Feb 23 21:35 archlinux-2018.02.01-x86_64.iso -> /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
The symlink belongs to user root, while the actual file still has the user/group of a regular user.
Now set up and deploy the virtual machine via virt-manager. Notice how the user/group of the image file changes to qemu:
$ ll /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
-rw-r--r--. 1 qemu qemu 565182464 Feb 24 08:49 archlinux-2018.02.01-x86_64.iso
At this point the VM manager, be it virt-manager or virt-install, throws the aforementioned error. This happens because user qemu doesn't have access to the full path. The home dir of user yahol is only accessible to user yahol:
$ ll -d /home/yahol/
drwx------. 7 yahol yahol 258 Feb 24 08:37 /home/yahol/
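A quick way to verify traversal permissions along the whole path is namei from util-linux (shown here with the image path from this example):
$ namei -l /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
It prints the owner, group, and mode of every path component, so the drwx------ home directory stands out immediately.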
Now let's create another path to which qemu has full access:
# mkdir -p /Qemu/Test/Iso
# mv archlinux-2018.02.01-x86_64.iso /Qemu/Test/Iso/
# chown -R qemu:qemu /Qemu/
# cd /var/lib/libvirt/images
# ln -s /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
# ll /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
-rw-rw-r--. 1 qemu qemu 565182464 Feb 23 08:53 /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
# ll /var/lib/libvirt/images
lrwxrwxrwx. 1 root root 46 Feb 24 09:39 archlinux-2018.02.01-x86_64.iso -> /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
This works perfectly fine, virtual machine has been deployed and it's running.
Possible solutions:
change ownership of the entire path to the actual image to user/group qemu
# chown -R qemu:qemu /Qemu/
give read+execute permissions to others (world) on the entire path to the image
# chmod -R o+rx /Qemu/
WARNING: This might have security implications!
create a directory owned by user/group qemu, dedicated to ISO images that will be symlinked
# mkdir -p /Qemu/Test/Iso
# chown -R qemu:qemu /Qemu/
Summary:
Even though everything is done by a user with root privileges, the actual user working with the VM images is qemu. Adding qemu to the wheel group would still require providing a means to authenticate, and since it's a no-login user, that might be tricky.

Mac OSX no valid sudoers sources found

I am always getting this error. What is the solution?
As the error message says: your /etc/sudoers file has the wrong permissions.
The normal permissions (on OS X 10.10) are:
$ ls -l /etc/sudoers
-r--r----- 1 root wheel 1293 Sep 19 2012 /etc/sudoers
so get a root shell in some other manner and issue chmod 440 /etc/sudoers
and/or the appropriate chgrp and chown commands (per the listing above, it should be owned by root:wheel).
How to get a root shell depends on what assets you have left.
The failsafe method is booting from the recovery partition, but booting in single-user mode should be enough in most cases.
Single-user mode: boot while holding "Command-S".
ref: https://support.apple.com/en-us/HT201573
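For example, from the single-user-mode root shell, the repair might look like this (a sketch based on the permissions shown above):
mount -uw /                  # remount the root filesystem read-write first
chown root:wheel /etc/sudoers
chmod 440 /etc/sudoers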
