Puppeteer sandbox: No usable sandbox - linux-kernel

I installed Puppeteer to generate PDFs / thumbnails, but I cannot get the Chrome Linux sandbox activated and configured. I always get the same error message:
(node:46) UnhandledPromiseRejectionWarning: Error: Failed to launch chrome!
[1208/055442.253403:FATAL:zygote_host_impl_linux.cc(116)] No usable sandbox! Update your kernel or see https://chromium.googlesource.com/chromium/src/+/master/docs/linux_suid_sandbox_development.md for more information on developing with the SUID sandbox. If you want to live dangerously and need an immediate workaround, you can try using --no-sandbox.
I followed the steps mentioned in the official documentation, but without success:
# cd to the downloaded instance
cd <project-dir-path>/node_modules/puppeteer/.local-chromium/linux-<revision>/chrome-linux/
sudo chown root:root chrome_sandbox
sudo chmod 4755 chrome_sandbox
# copy sandbox executable to a shared location
sudo cp -p chrome_sandbox /usr/local/sbin/chrome-devel-sandbox
# export CHROME_DEVEL_SANDBOX env variable
export CHROME_DEVEL_SANDBOX=/usr/local/sbin/chrome-devel-sandbox
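Note that export only affects the current shell. A minimal sketch of making the variable persistent, assuming a bash login shell that reads ~/.bashrc:
# make CHROME_DEVEL_SANDBOX available in future shells, then check it is set
echo 'export CHROME_DEVEL_SANDBOX=/usr/local/sbin/chrome-devel-sandbox' >> ~/.bashrc
source ~/.bashrc
echo "$CHROME_DEVEL_SANDBOX"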

Try with:
sudo sysctl -w kernel.unprivileged_userns_clone=1
This allows you, as an unprivileged user, to use Chromium's sandbox.
The setting is temporary and only stays active until reboot.
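To keep the setting across reboots, a minimal sketch assuming a distribution that reads /etc/sysctl.d (the file name here is arbitrary):
# persist the setting and reload all sysctl configuration files
echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/00-local-userns.conf
sudo sysctl --system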

You likely have the setuid bit wrong because of the cp command:
$ sudo touch orig
$ ls -l orig
-rw-r--r-- 1 root root 0 Feb 11 23:31 orig
$ sudo chmod 4755 orig
$ ls -l orig
-rwsr-xr-x 1 root root 0 Feb 11 23:31 orig
$ sudo cp orig new
$ ls -l new
-rwxr-xr-x 1 root root 0 Feb 11 23:31 new
The setuid bit (4th character) was changed from s to x after cp.
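So after copying, a minimal sketch of verifying and, if needed, restoring ownership and the setuid bit on the installed copy (paths as in the question above):
# check the installed copy, then restore root ownership and the setuid bit if cp dropped them
ls -l /usr/local/sbin/chrome-devel-sandbox
sudo chown root:root /usr/local/sbin/chrome-devel-sandbox
sudo chmod 4755 /usr/local/sbin/chrome-devel-sandbox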

Related

How To Docker Copy to user root ~

I'm writing a Dockerfile to run ROS on my Windows rig and I can't seem to get this COPY command to copy into the container root user's home directory or any subdirectory there. I've tried a few things, including messing with the ownership. I know the file is ugly, but I'm still learning. I'm not really sure what the issue is here.
This file sits next to a /repos dir containing a git repo, which can be found here (the ros-noetic branch). This is also the location from which I build and run the container.
The overall objective is to get roscore to run (which it has been doing), then exec in with another terminal and get rosrun ros_essentials_cpp (node name) to actually work.
# ros-noetic with other stuff added
FROM osrf/ros:noetic-desktop-full
SHELL ["/bin/bash", "-c"]
RUN apt update
RUN apt install -y git
RUN apt-get update && apt-get -y install cmake protobuf-compiler
RUN bash
RUN . /opt/ros/noetic/setup.bash && mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/ && chmod 777 src && catkin_make && . devel/setup.bash
RUN cd /
RUN mkdir /repos
COPY /repos ~/catkin_ws/src
RUN echo ". /opt/ros/noetic/setup.bash" >> ~/.bashrc
Expanding the tilde to the home directory is a shell feature, which isn't supported by Dockerfile's COPY command. You're putting the files into a directory that is literally named ~, i.e. your container image probably contains something like this:
...
dr-xr-xr-x 13 root root 0 Jun 9 00:07 sys
drwxrwxrwt 7 root root 4096 Nov 13 2020 tmp
drwxr-xr-x 13 root root 4096 Nov 13 2020 usr
drwxr-xr-x 18 root root 4096 Nov 13 2020 var
drwxr-xr-x 2 root root 4096 Jun 9 00:07 ~ <--- !!!
Since root's home directory is always /root, you can use this:
COPY /repos /root/catkin_ws/src
You also need to pay attention to the Docker build context.
When you run docker build, the path you pass is used as the context for building your image, and COPY sources are taken from it.
If you are not building from the / folder, your COPY /repos command won't work.
Try changing the Docker context like this:
docker build /
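For reference, a minimal sketch of how the context is passed on the command line (the tag is arbitrary; the final path argument is the build context):
# the last argument is the build context; COPY sources are resolved inside it
docker build -t ros-noetic-custom <context-path>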

Virt-install can't find image behind symbolic link

I ran into some strange behavior while setting up a script to start kvm instances today, and am hoping you all can weigh in on what's going on here.
Setup:
I have a script that starts a kvm with virt-install.
virt-install ... --disk=image.qcow2 ...
I want to run the same script on different versions of image.qcow2, so I created a symbolic link to my latest image.
My directory structure looks something like this:
startKvm.sh
image.qcow2 -> image_v2.0.qcow2
image_v2.0.qcow2
image_v1.0.qcow2
However, when I tried to run my virt-install command, it returned the following error.
ERROR internal error: process exited while connecting to monitor:
datetime qemu-kvm: -drive file=/path/image.qcow2,if=none,id=drive-ide0-0-0,format=qcow2: could not open
disk image /path/image.qcow2: Could not open file: Permission
denied
Thoughts on the cause and alternate solutions?
I had a similar idea to use symlinks; however, in my scenario I run virt-manager via SSH with X forwarding as a user in the wheel (admin) group. Creating a virtual guest produced the same error:
Unable to complete install: 'internal error: process exited while connecting to monitor: 2018-02-24T07:53:19.064452Z qemu-kvm: -drive file=/var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso,format=raw,if=none,id=drive-ide0-0-1,readonly=on: could not open disk image /var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso: Could not open '/var/lib/libvirt/images/archlinux-2018.02.01-x86_64.iso': Permission denied'
The issue is, in fact, with permissions. When setting up an .iso image using qemu-kvm, the file's ownership is changed to user/group qemu upon deployment. We need to make sure user qemu has access rights along the whole path to the file.
Test case
User running virt-manager: yahol (wheel/admin group)
Iso or qcow2 image location: /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
Note: please ignore the timestamps as I've hacked together this answer over 2 days. It was late and I was too tired to finish it the previous day.
Starting with image in user's home directory:
$ ll /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
-rw-r--r--. 1 yahol yahol 565182464 Feb 24 08:49 archlinux-2018.02.01-x86_64.iso
Creating symlink as root in default path to KVM images:
# cd /var/lib/libvirt/images
# ln -s /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
lrwxrwxrwx. 1 root root 48 Feb 23 21:35 archlinux-2018.02.01-x86_64.iso -> /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
The symlink belongs to user root, while the actual file still has the user/group of a regular user.
Setting up and deploying a virtual machine via virt-manager. Notice how the user/group of the image file changes to qemu:
$ ll /home/yahol/isos/archlinux-2018.02.01-x86_64.iso
-rw-r--r--. 1 qemu qemu 565182464 Feb 24 08:49 archlinux-2018.02.01-x86_64.iso
At this point the VM manager, be it virt-manager or virt-install, throws the aforementioned error. This happens because user qemu doesn't have access to the full path. The home dir of user yahol is only accessible to user yahol:
$ ll -d /home/yahol/
drwx------. 7 yahol yahol 258 Feb 24 08:37 /home/yahol/
Now let's create another path to which qemu has full access:
# mkdir -p /Qemu/Test/Iso
# mv archlinux-2018.02.01-x86_64.iso /Qemu/Test/Iso/
# chown -R qemu:qemu /Qemu/
# cd /var/lib/libvirt/images
# ln -s /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
# ll /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
-rw-rw-r--. 1 qemu qemu 565182464 Feb 23 08:53 /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
# ll /var/lib/libvirt/images
lrwxrwxrwx. 1 root root 46 Feb 24 09:39 archlinux-2018.02.01-x86_64.iso -> /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
This works perfectly fine; the virtual machine has been deployed and is running.
Possible solutions:
change ownership of the entire path to the actual image to user/group qemu
# chown -R qemu:qemu /Qemu/
give read+execute permissions to others (world) on the entire path to the image
# chmod -R o+rx /Qemu/
WARNING: This might have security implications!
create a directory owned by user/group qemu dedicated to iso images that will be symlinked
# mkdir -p /Qemu/Test/Iso
# chown -R qemu:qemu /Qemu/
Summary:
Even though everything is done by a user with root privileges, the actual user working with the VM images is qemu. Adding qemu to the wheel group would still require providing a means to authenticate and, since it's a no-login user, that might be tricky.
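A quick way to check every component of the path, a sketch assuming util-linux's namei is available:
# print owner, group and mode of each directory leading to the image
namei -l /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso
# or test access directly as the qemu user (sudo works even for a no-login user)
sudo -u qemu test -r /Qemu/Test/Iso/archlinux-2018.02.01-x86_64.iso && echo readable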

Mac OSX no valid sudoers sources found

I keep getting this error. What is the solution?
As the error message says: your /etc/sudoers file has the wrong permissions.
the normal permissions (on OS X 10.10) are:
$ ls -l /etc/sudoers
-r--r----- 1 root wheel 1293 Sep 19 2012 /etc/sudoers
so get a root shell in some other manner and issue chmod 440 /etc/sudoers
and/or the appropriate chgrp and chown commands.
How to get a root shell depends on what you have left to work with.
The failsafe method would be booting from the recovery partition, but booting in single-user mode should be enough in most cases.
Single user mode: boot holding "Command-S"
ref: https://support.apple.com/en-us/HT201573
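A minimal sketch of the repair from single-user mode, assuming the standard macOS layout (the root filesystem starts read-only, hence the mount step):
# remount the root filesystem read-write
/sbin/mount -uw /
# restore the expected ownership and permissions on the sudoers file
chown root:wheel /etc/sudoers
chmod 440 /etc/sudoers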

Meteor will not run without Sudo?

On OS X Yosemite with the latest version of Meteor (1.0.1), no matter how many times I uninstall and reinstall it, I can't seem to get it running without sudo. My user account is an administrator account, but Meteor refuses to run without sudo. The errors I'm getting are all:
-bash: meteor: command not found
I've seen a few posts on here with similar problems. I've tried repairing disk permissions with Disk Utility. I've tried:
sudo chown -R $myUsername /usr/local/bin/meteor
I'm not sure what else I can do, because it seems to be a permissions issue. Does anyone have any suggestions?
Additional info that might help:
$ sudo which meteor
/usr/local/bin/meteor
$ sudo ls -l /usr/local/bin/meteor
-rwxrwxrwx 1 root wheel 3528 Dec 18 23:14 /usr/local/bin/meteor
$ ls -ld /usr/local/bin
drwx------ 6 502 wheel 204 Dec 18 23:14 /usr/local/bin
By the way, ls -l /usr/local/bin/meteor only works with sudo.
After we clarified the permissions of the meteor executable and its base directory,
the problem became quite clear:
The Meteor binary is located in /usr/local/bin/meteor
Your user didn't have permission to the directory /usr/local/bin
The steps to resolve:
Add permission on the base directory: sudo chmod +rx /usr/local/bin
If necessary, add the base directory to PATH: PATH=$PATH:/usr/local/bin
For future reference:
When you get this kind of error: -bash: XYZ: command not found
The first thing to do is find the absolute path of XYZ, for example /path/to/XYZ
Try to run with the absolute path /path/to/XYZ
If running with /path/to/XYZ gives -bash: /path/to/XYZ: Permission denied that means you have a problem with permissions on the file and/or directories:
You need read and exec permission on the file itself: sudo chmod +rx /path/to/XYZ
You need exec permission on all path elements leading up to the file: sudo chmod +x /path /path/to
After fixing permission issues, running with /path/to/XYZ should work
After fixing permission issues, if running with XYZ (without full path) still doesn't work, that means /path/to is not on your PATH. Fix with PATH=$PATH:/path/to
Note: the above sudo chmod commands give permissions (read and exec) to all users: owner + group + other. In the case of the OP (and in most common cases), this is perfectly fine.
In situations with more sophisticated permission setup, you might need to be more specific, and use g+rx instead of +rx.
(for the record)
If it works with sudo, and without sudo you get command not found, that means that meteor is on the PATH for root but not for your user. To make it work for your user, you need to find the path to meteor and add it to your user's PATH. For example:
Become root with sudo su -
Find the path of meteor, run command: which meteor
Logout from root (Control-D) to return to your user
Add the base directory to PATH, for example if earlier which meteor gave you /usr/local/bin/meteor, then do this: PATH=$PATH:/usr/local/bin
After this, it should work with your user. To make it "permanent", add the last step in your ~/.bashrc.
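A minimal sketch of that last step, assuming a bash shell as above:
# append /usr/local/bin to PATH for future shells, then reload the current one
echo 'export PATH="$PATH:/usr/local/bin"' >> ~/.bashrc
source ~/.bashrc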
If this still doesn't work, then perhaps your user doesn't have the execute permission on the file. Fix that with this command:
sudo chmod +x /usr/local/bin/meteor
From your comments it also seems your user doesn't have permission on the /usr/local/bin directory itself. Fix that with this command:
sudo chmod +rx /usr/local/bin
You shouldn't need an admin account to run it; a standard user account works fine. You can locate the meteor file by typing which meteor; it will tell you which file is being executed.
Try removing the .meteor folder in your home directory, something like rm -rf ~/.meteor, and the script from the bin folder with rm /usr/local/bin/meteor or rm `which meteor` (the quote marks there are backticks, on the same key as ~)
And then reinstall meteor without sudo using the curl https://install.meteor.com/ | sh command.
Should hopefully install with all the correct permissions...
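Put together, a rough sketch of the clean reinstall described above (sudo only to remove the old launcher, not for the install itself):
# remove the old installation and the launcher script
rm -rf ~/.meteor
sudo rm /usr/local/bin/meteor
# reinstall without sudo
curl https://install.meteor.com/ | sh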

Automake/autoconf configuration to install as SUID

Is there a way to configure a binary to be installed as "SUID" using automake/autoconf?
Is there any magic that can make make install set the setuid bit on a given binary target?
NOTE:
I am running a "fakerooted" make install inside a script to create a tar file.
I tried:
# Makefile.am
bin_PROGRAMS = my_bin
#...
install-exec-hook:
	echo "#### Setting SUID for my_bin. ####"
	ls -l $(DESTDIR)$(bindir)/my_bin
	chmod 4755 $(DESTDIR)$(bindir)/my_bin
	ls -l $(DESTDIR)$(bindir)/my_bin
	echo "####-------------------------------####"
But with no success.
During make install I see:
#### Setting SUID for my_bin. ####
-rwxr-xr-x 1 root root 8704 Mar 28 13:30 /install.pak/usr/bin/my_bin
-rwsr-xr-x 1 root root 8704 Mar 28 13:30 /install.pak/usr/bin/my_bin
####-------------------------------####
So one could think it is a problem with fakeroot, but if I move the chmod out of Makefile.am to my packaging script, it works. This is enough to convince me fakeroot is doing its job.
Thanks.
GAHH!!
Someone had inadvertently put this in my script, after $FAKEROOT make install:
$FAKEROOT chmod 755 $PAK_DIR/usr/bin/*
replacing all the permissions written by make install.
After removing this line, the install-exec-hook works as expected and the SUID bit is preserved...
(Where did I put my axe?...)
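For anyone hitting the same thing, a quick way to confirm the bit survives into the package, assuming the tarball is built from $PAK_DIR (the archive name here is hypothetical):
# list the archive and check that my_bin shows the setuid bit (-rwsr-xr-x)
tar -tvf package.tar | grep my_bin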
