Command sudo: not found [closed] - shell

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
When executing my init.sh I am calling the command:
sudo bash ${INSTALLPATH}seafile.sh start
Following this, the error:
seafile_1_f2341d904d27 | /bin/sh: 1: sudo: not found
occurs.
Opening the directory /bin and looking at sh shows only unreadable characters, but that is expected: /bin/sh is a compiled binary, not a shell script.
The Dockerfile that calls init.sh looks like this:
FROM debian
#FROM armv7/armhf-debian
MAINTAINER me
# install packages
RUN apt-get update && apt-get install sudo -y
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates \
python2.7 \
python-setuptools \
python-imaging \
python-ldap \
python-urllib3 \
sqlite3 \
wget
# Copy scripts
ADD ./scripts /scripts
# set environment variables
ENV SERVER_NAME mdh-seafile
ENV SERVER_IP 127.0.0.1
ENV FILESERVER_PORT 8082
ENV SEAFILE_DIR /data
ENV SEAFILE_VERSION seafile-server-6.3.4
ENV INSTALLPATH /opt/seafile/${SEAFILE_VERSION}/
# clean for smaller image
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Volumes for persistent configuration
VOLUME /opt/seafile/
# added
COPY /opt/seafile/${SEAFILE_VERSION}/seafile.sh .
COPY /opt/seafile/${SEAFILE_VERSION}/seahub.sh .
# set entrypoint
ENTRYPOINT sudo bash /scripts/init.sh
Init.sh:
else
# start seafile
# whoami -> Output: root
sudo bash ${INSTALLPATH}seafile.sh start
sudo bash ${INSTALLPATH}seahub.sh start
# keep seafile running in foreground to prevent docker container shutting down
while true; do
sudo tail -f /opt/seafile/logs/seafile.log
sleep 10
done
fi
I execute everything by calling sudo bash install.sh, which runs docker-compose; the compose file ties the components together.
The Docker-Compose:
version: '2'
services:
db:
#image: hypriot/rpi-mysql
image: mysql
environment:
- MYSQL_ROOT_PASSWORD=###
volumes:
- /mnt/data/mysql:/var/lib/mysql
duply:
build: .
volumes:
- ./config:/config
- /mnt/data:/mnt/data
- ./webinterface:/var/www/html/MyDigitalHome
- /mnt/guestbackup:/mnt/guestbackup/backup
#- /mnt/usb-removable:/usb-removable
ports:
- "8080:80"
- "24:22"
links:
- db
seafile:
build: seafile/
volumes:
- ./seafile/config:/config
- /mnt/data/seafile:/data
ports:
- "8000:8000"
- "8082:8082"
environment:
- SEAFILE_ADMIN=####mydigitalhome.xy
- SEAFILE_ADMIN_PW=###
owncloud:
build: owncloud/
volumes:
- /mnt/data/owncloud:/data
- ./owncloud/config:/var/www/html/config
ports:
- "8090:80"
links:
- db:mysql
The current errors are:
ERROR: Service 'seafile' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder845725722/opt/seafile/seafile-server-6.3.4/seafile.sh: no such file or directory
Attaching to mdh_seafile_1_f2341d904d27, mdh_db_1_46bebe733124, mdh_duply_1_170a5db26129, mdh_owncloud_1_260c3a56f2a5
seafile_1_f2341d904d27 | bash: seafile.sh: No such file or directory
seafile_1_f2341d904d27 | bash: seahub.sh: No such file or directory
seafile_1_f2341d904d27 | tail: cannot open '/opt/seafile/logs/seafile.log' for reading: No such file or directory
seafile_1_f2341d904d27 | tail: no files remaining

I am assuming you are running this in Docker.
You can add to your Dockerfile:
RUN apt update && apt install -y sudo
This should resolve your problem.

Starting from the debian image, you need to install sudo. You can do that by adding RUN apt-get update && apt-get install -y sudo near the beginning of your Dockerfile. Then rebuild your image with docker build . and run the command again.
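For what it's worth, the ENTRYPOINT in the question doesn't need sudo at all (the question itself notes that whoami prints root), and the failing COPY lines fail because COPY/ADD sources are resolved relative to the build context on the host, not inside the image's filesystem. A minimal sketch of the relevant Dockerfile lines under those assumptions (not a drop-in replacement for the full file):

```dockerfile
FROM debian

# Scripts come from the build context on the host; this is the only place
# COPY/ADD may read from. A source like /opt/seafile/... refers to a path
# inside the *context*, which is why the build failed with
# "stat /var/lib/docker/tmp/.../seafile.sh: no such file or directory".
ADD ./scripts /scripts

ENV SEAFILE_VERSION seafile-server-6.3.4
ENV INSTALLPATH /opt/seafile/${SEAFILE_VERSION}/

# No sudo needed: containers run as root by default ("whoami -> root"),
# so the script can call bash ${INSTALLPATH}seafile.sh start directly.
ENTRYPOINT ["bash", "/scripts/init.sh"]
```

With sudo also dropped from init.sh, the "sudo: not found" error cannot occur regardless of whether the package is installed.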

Related

Create more than one VNC server on the same host in Docker

I would like to run a VNC server in two Docker containers, call them ContainerA and ContainerB. Both must be exposed to the host network via --network="host". When I try to instantiate the second VNC server, it fails to start because the address space is already in use (the address being alfred). Even after changing the port, display offset, etc., nothing seems to work.
If anyone has experience with this, that would be great. One solution I'll rule out right away is bridging the containers so that each has its own address space; given how the system is configured, that cannot happen.
Thanks!
My Current Setup
The setup now is an intermediate VNC container plus two Firefox containers, still a work in progress. The work for the two VNC servers is in the two Dockerfiles listed as FirefoxVNC[A,B].
# compose.yaml
version: '3.7'
services:
vncserver:
build: ./VNCServer
container_name: vncserver
network_mode: "host"
firefoxa:
build: ./FirefoxVNCA
container_name: firefoxvnca
network_mode: "host"
firefoxb:
build: ./FirefoxVNCB
container_name: firefoxvncb
network_mode: "host"
# FirefoxVNCA/Dockerfile
FROM nvidia/cuda:11.7.0-devel-centos7
# RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum update -y
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y firefox \
x11vnc \
xorg-x11-server-Xvfb
EXPOSE 5900
#RUN echo "exec firefox" > ~/.xinitrc && chmod +x ~/.xinitrc
#CMD ["x11vnc", "-create", "-forever", "-rfbport", "5901"]
CMD ["/bin/bash", "-c", "-l", "firefox"]
# FirefoxVNCB/Dockerfile
FROM nvidia/cuda:11.7.0-devel-centos7
# RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum update -y
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y firefox \
x11vnc \
xorg-x11-server-Xvfb
EXPOSE 5900
#RUN echo "exec firefox" > ~/.xinitrc && chmod +x ~/.xinitrc
#CMD ["x11vnc", "-create", "-forever", "-rfbport", "5901"]
CMD ["/bin/bash", "-c", "-l", "firefox"]

Docker networking issue only under windows 10

I have a Dockerfile and docker-compose.yml that I developed on OSX. It looks like this:
cat Dockerfile:
FROM alpine:3.7
ADD ./app /home/app/
WORKDIR /home/app/
RUN apk add --no-cache postgresql-dev gcc python3 python3-dev musl-dev && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
rm -r /root/.cache && \
pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3", "app.py"]
cat docker-compose.yml:
version: "3"
services:
web:
build: .
ports:
- "5000:5000"
volumes:
- ./app/:/home/app/
depends_on:
- db
db:
image: postgres:10
environment:
- POSTGRES_USER=testusr
- POSTGRES_PASSWORD=password
- POSTGRES_DB=testdb
expose:
- 5432
I then start via docker-compose up --build -d and open the web browser on port 5000; it shows a form (as expected). Again, this works perfectly on OSX.
However, on Windows 10 it just shows a blank page. And docker port <CONTAINER> shows 5000/tcp -> 0.0.0.0:5000 so I know it's bound to the right port.
Additionally, if I docker logs -f <python container> it doesn't show the request ever getting to the container. Normally a new line is printed (w/status code) for each flask/nginx response. In this case it looks like it's not even getting to the python application container.
So, any idea why this doesn't work on Windows?
So the answer is that you have to use docker-machine ip to see which IP the container is bound to. It doesn't bind to localhost/127.0.0.1 the way it does on OSX.

Docker-Compose mounted files are not owned by root

I'm mounting files from my laptop's/host machine's working directory into the container, and I'm not sure why the mounted files are owned by www-data. The file q was created on the host machine, so it's owned by root, as expected.
Inside container:
Some other observations:
Host UID (my OS X account): 1783256022 (not sure why it looks like this)
Container UID (www-data): 33
Dockerfile:
FROM nginx:1.13.8
RUN apt-get update
RUN apt-get install -y nano && \
apt-get install -y git && \
apt-get install -y procps && \
apt-get install net-tools
RUN mkdir -p /var/www/html
WORKDIR /var/www/html
docker-compose.yml:
version: '3'
services:
web:
build:
context: ./docker/nginx
image: tc-web:0.1.0
container_name: tc-web
volumes:
- ./:/var/www/html/
- ./docker/nginx/.bashrc:/root/.bashrc
- ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
ports:
- 8080:80
When you mount a directory using volumes on a Unix-like system, files keep the same permissions they have on the host in terms of uid and gid. That means the uid that created the files on your host (probably your user) is the same numeric user id as the www-data user inside the container.
When you create a file from inside the container, since the container runs as the root user, the file is owned by root (both inside and outside the container).
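The numeric-uid behaviour can be checked from the host alone; a small sketch (the file name is illustrative):

```shell
# Bind mounts preserve numeric uid/gid; only the uid -> name lookup differs
# between the host's and the container's /etc/passwd (e.g. 33 -> www-data).
touch demo.txt            # created by the current user
stat -c '%u' demo.txt     # numeric uid of the owner...
id -u                     # ...matches the current user's uid
```

Inside a container that bind-mounts this directory, demo.txt reports the same numeric uid; if that number happens to map to www-data in the container's /etc/passwd, that is the name ls shows.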

permission issue with docker under windows

I'm using this scheme :
1/ I'm working on windows 7
2/ I'm using vagrant to mount a "ubuntu/trusty64" box
3/ I apt-get install ansible
4/ I install docker and docker-compose with ansibe
5/ I create a docker image with this dockerfile :
FROM php:7-apache
MAINTAINER Bruno DA SILVA "bruno.dasilva#foo.com"
COPY containers-dirs-and-files/var/www/html/ /var/www/html/
WORKDIR /var/www/html
6/ I run it :
sudo docker build -t 10.100.200.200:5000/pimp-hello-world .
sudo docker run -p 80:80 -d --name test-php 10.100.200.200:5000/pimp-hello-world
7/ apache can't display the page, I have to add :
RUN chmod -R 755 /var/www/html
to the dockerfile in order to have it visible.
so here is my question : can I handle files permission while working on windows (and how)? Or do I have to move under linux?
This happens on Linux too. Docker copies the files and makes root the owner. The only way I have found to overcome this without using chmod is to archive the files in a tar file and then use
ADD content.tgz /var/www/html
which expands the archive automatically.
Regards
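The tar trick works because tar records each file's permission bits and ownership in the archive, and ADD automatically unpacks a local compressed tarball at build time. A rough sketch (directory and file names are illustrative, not from the question):

```shell
# Demo: permission bits survive a tar round-trip.
mkdir -p site && echo '<?php ?>' > site/index.php
chmod 755 site/index.php            # mode we want preserved
tar -czf content.tgz -C site .      # tar records the 755 in the archive

# In the Dockerfile:  ADD content.tgz /var/www/html
# ADD unpacks local tarballs automatically, restoring the stored modes
# instead of applying COPY's root-owned defaults.
```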

docker entrypoint running bash script gets "permission denied"

I'm trying to dockerize my node.js app. When the container is built I want it to run a git clone and then start the node server. Therefore I put these operations in a .sh script. And run the script as a single command in the ENTRYPOINT:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y build-essential libssl-dev gcc curl npm git
#install gcc 4.9
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository -y ppa:ubuntu-toolchain-r/test
RUN apt-get update
RUN apt-get install -y libstdc++-4.9-dev
#install newst nodejs
RUN curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ADD package.json /usr/src/app/
RUN npm install
ADD docker-entrypoint.sh /usr/src/app/
EXPOSE 8080
ENTRYPOINT ["/usr/src/app/docker-entrypoint.sh"]
My docker-entrypoint.sh looks like this:
git clone git#<repo>.git
git remote add upstream git#<upstream_repo>.git
/usr/bin/node server.js
After building this image and run:
docker run --env NODE_ENV=development -p 8080:8080 -t -i <image>
I'm getting:
docker: Error response from daemon: oci runtime error: exec: "/usr/src/app/docker-entrypoint.sh": permission denied.
I shell into the container and the permission of docker-entrypoint.sh is:
-rw-r--r-- 1 root root 292 Aug 10 18:41 docker-entrypoint.sh
three questions:
Does my bash script have wrong syntax?
How do I change the permission of a bash file before adding it into an image?
What's the best way to run multiple git commands in entrypoint without using a bash script?
Thanks.
Taking the three questions in order:
1. "Permission denied" prevents your script from being invoked at all. Thus, the only syntax that could possibly be pertinent is that of the first line (the "shebang"), which should look like #!/usr/bin/env bash, or #!/bin/bash, or similar depending on your target's filesystem layout.
2. Most likely the filesystem permissions are not set to allow execute. It's also possible that the shebang references something that isn't executable, but this is far less likely.
3. Mooted by the ease of repairing the prior issues.
The simple reading of
docker: Error response from daemon: oci runtime error: exec: "/usr/src/app/docker-entrypoint.sh": permission denied.
...is that the script isn't marked executable.
RUN ["chmod", "+x", "/usr/src/app/docker-entrypoint.sh"]
will address this within the container. Alternately, you can ensure that the local copy referenced by the Dockerfile is executable, and then use COPY (which is explicitly documented to retain metadata).
An executable file needs to have permissions for execute set before you can execute it.
In your machine where you are building the docker image (not inside the docker image itself) try running:
ls -la path/to/directory
The first column of the output for your executable (in this case docker-entrypoint.sh) should have the executable bits set something like:
-rwxrwxr-x
If not then try:
chmod +x docker-entrypoint.sh
and then build your docker image again.
Docker uses its own file system, but it copies everything over (including the permission bits) from the source directories.
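Concretely, the executable bit can be set in the build context before building, and COPY/ADD will carry it into the image (script name taken from the question):

```shell
chmod +x docker-entrypoint.sh   # set the execute bit on the host copy
ls -l docker-entrypoint.sh      # typically shows -rwxr-xr-x now
test -x docker-entrypoint.sh && echo "executable"
# docker build .                # the bit travels into the image via COPY/ADD
```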
I faced the same issue and resolved it with:
ENTRYPOINT ["sh", "/docker-entrypoint.sh"]
For the Dockerfile in the original question it should be like:
ENTRYPOINT ["sh", "/usr/src/app/docker-entrypoint.sh"]
The problem is that the original file does not have execute permission.
Check whether the original file has the permission:
run ls -al
If the result shows -rw-r--r--, run
chmod +x docker-entrypoint.sh
before docker build!
Remove the trailing dot [.]
This problem took me more than 3 hours; in the end, the fix was simply removing the dot from the end of the command.
The problem was:
docker run -p 3000:80 --rm --name test-con test-app .
/usr/local/bin/docker-entrypoint.sh: 8: exec: .: Permission denied
Just remove the dot from the end of your command line:
docker run -p 3000:80 --rm --name test-con test-app
Grant execution rights to the file docker-entrypoint.sh
sudo chmod 775 docker-entrypoint.sh
This may be a bit silly, but the error message I got was Permission denied, and it sent me spiralling down a very wrong path trying to solve it.
I hadn't even added any bash script myself; I think one is added by the nodejs image I use.
FROM node:14.9.0
I was wrongly running this to expose/connect the port on my local machine:
docker run -p 80:80 [name] . # this is wrong!
which gives
/usr/local/bin/docker-entrypoint.sh: 8: exec: .: Permission denied
But you shouldn't even have a dot at the end; it was added to the documentation of another project's Docker image by mistake. You should simply run:
docker run -p 80:80 [name]
I like Docker a lot, but it's sad that it has so many gotchas like this and that the error messages are not always very clear...
This is an old question, asked two years before my answer, but I am going to post what worked for me anyway.
In my working directory I have two files: Dockerfile & provision.sh
Dockerfile:
FROM centos:6.8
# put the script in the /root directory of the container
COPY provision.sh /root
# execute the script inside the container
RUN /root/provision.sh
EXPOSE 80
# Default command
CMD ["/bin/bash"]
provision.sh:
#!/usr/bin/env bash
yum upgrade
I was able to make the file inside the Docker container executable by setting the file outside the container as executable (chmod 700 provision.sh) and then running docker build . again.
If you do not use a Dockerfile, you can simply add the permission change as a command-line argument to bash:
docker run -t <image> /bin/bash -c "chmod +x /usr/src/app/docker-entrypoint.sh; /usr/src/app/docker-entrypoint.sh"
If you still get Permission denied errors when you try to run your script in the Docker entrypoint, try not using the shell form of the ENTRYPOINT:
Instead of ENTRYPOINT ./bin/watcher, write ENTRYPOINT ["./bin/watcher"]:
https://docs.docker.com/engine/reference/builder/#entrypoint
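As a sketch of the difference (the ./bin/watcher path is just the example from this answer):

```dockerfile
# Shell form: Docker wraps the command as /bin/sh -c "./bin/watcher",
# so /bin/sh must exist and be able to reach and interpret the target.
ENTRYPOINT ./bin/watcher

# Exec form: the binary is exec'd directly, with no shell in between;
# it also receives signals (e.g. SIGTERM on docker stop) as PID 1.
ENTRYPOINT ["./bin/watcher"]
```

(Only the last ENTRYPOINT in a Dockerfile takes effect; the two lines above are alternatives, not a sequence.)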
