Please help me add credentials to a Docker image, so that if someone tries to enter the image it asks for credentials.
Scenario -
Let's say I downloaded an ubuntu image from the official site, made some changes, and created a new image ubuntu-myapp.
Now no one should be able to enter the image to copy or change my code without providing credentials.
Create a Dockerfile that sets a password for the root user and then switches to a different user, something like this.
$ cat Dockerfile
FROM ubuntu:16.04
COPY raghu/varibale.py /root
#create password for the root user. echo "USERNAME:NEWPASSWORD" | chpasswd
RUN echo "root:raghu" | chpasswd
#create a different user for public access.
RUN useradd -ms /bin/bash raghu
#change to the new user
USER raghu
Build the Docker image from the Dockerfile. Anyone can run this Docker image, but the script can be accessed only by the root user.
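For example (assuming the Dockerfile above is in the current directory; the tag matches the auth:2.0 image listed below):
$ docker build -t auth:2.0 .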
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
auth 2.0 0c15c8ef5594 7 seconds ago 112MB
Let's execute the Docker image and check if the user can access the file without the root password:
$ docker run -it auth:2.0 /bin/bash
raghu@17b003083ff7:/$ ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
raghu@17b003083ff7:/$ cd root
bash: cd: root: Permission denied
raghu@17b003083ff7:/$ su -
Password:
root@17b003083ff7:~# ls
varibale.py
root@17b003083ff7:~# pwd
/root
root@17b003083ff7:~# exit
logout
raghu@17b003083ff7:/$ exit
exit
The file cannot be accessed even with the --privileged option unless the root password is provided:
$ docker run -it --privileged auth:2.0 /bin/bash
raghu@3886fb3950f8:/$ ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
raghu@3886fb3950f8:/$ cd root
bash: cd: root: Permission denied
raghu@3886fb3950f8:/$ su -
Password:
root@3886fb3950f8:~# ls
varibale.py
root@3886fb3950f8:~# exit
logout
raghu@3886fb3950f8:/$ exit
exit
Hope this helps.
In your Dockerfile, you need to set a password for the root user:
echo 'newpassword' | passwd root --stdin
(Note that passwd --stdin is not available on Debian/Ubuntu based images; there you can use chpasswd instead, as in the answer above.) Also make sure that the folders holding your content can be modified only by root.
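For example, a minimal sketch of that in a Dockerfile (assuming the code lives under /root, as in the answer above; on most base images /root is already root-only):
# ensure only root can enter or read the directory holding the code
RUN chmod -R 700 /root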
Related
I am trying to build the image with:
docker build -t db-demo .
But I get
RUN mkdir -p /usr/src/app:
#5 0.512 mkdir: cannot create directory '/usr/src/app': Permission denied
The Dockerfile
FROM mcr.microsoft.com/mssql/server
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN chmod +x /usr/src/app/run-initialization.sh
ENV SA_PASSWORD bpassword
ENV ACCEPT_EULA Y
ENV MSSQL_PID Express
EXPOSE 1433
CMD /bin/bash ./entrypoint.sh
The OS is Windows. How can I fix this?
If we start the mssql container with an interactive shell:
docker run -it --rm mcr.microsoft.com/mssql/server /bin/bash
and then look at the active user within the container:
mssql@ed73727870bb:/$ whoami
mssql
we see that the active user is mssql. Furthermore, if we look at the permissions for /usr/src inside the container:
mssql@ed73727870bb:/$ ls -lisa /usr | grep -i src
163853 4 drwxr-xr-x 2 root root 4096 Apr 15 2020 src
we see that only root has write-access to directory /usr/src.
Thus, if we want to create a directory /usr/src/app that user mssql can write to, we will have to
create it as root and
grant the appropriate permissions to mssql.
This leads to the following Dockerfile:
FROM mcr.microsoft.com/mssql/server
# change active user to root
USER root
# create the app directory
RUN mkdir -p /usr/src/app
# set mssql as owner of the app directory
RUN chown mssql /usr/src/app
# change back to user mssql
USER mssql
WORKDIR /usr/src/app
# sanity check: try to write a file
RUN echo "Hello from user mssql" > hello.txt
If we build and run this Dockerfile:
docker build -t turing85/my-mssql -f Dockerfile .
docker run -it --rm turing85/my-mssql /bin/bash
We can now see that:
the active user is still mssql:
mssql@85e401ccc3f9:/usr/src/app$ whoami
mssql
a file /usr/src/app/hello.txt has been created, and user mssql has read-access:
mssql@85e401ccc3f9:/usr/src/app$ cat hello.txt
Hello from user mssql
user mssql has write-access to /usr/src/app:
mssql@85e401ccc3f9:/usr/src/app$ touch test.txt && ls -lisa
total 16
171538 4 drwxr-xr-x 1 mssql root 4096 Nov 6 20:13 .
171537 8 drwxr-xr-x 1 root root 4096 Nov 6 20:02 ..
171539 4 -rw-r--r-- 1 mssql root 17 Nov 6 20:02 hello.txt
171604 0 -rw-r--r-- 1 mssql root 0 Nov 6 20:13 test.txt
user mssql has no write-access to /usr/src:
mssql@85e401ccc3f9:/usr/src/app$ touch ../test2.txt
touch: cannot touch '../test2.txt': Permission denied
A comment on the Dockerfile in the post:
It seems that the goal is to copy an application into the mssql container, presumably in order to start said application within the mssql container. While this is possible (with some configuration), I strongly advise against this approach. We could instead define two containers (one for the database, one for the application), e.g. through a docker-compose file.
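A minimal sketch of such a docker-compose.yml (the service names and the application image my-app-image are assumptions for illustration, not taken from the post):
version: "3"
services:
  db:
    image: mcr.microsoft.com/mssql/server
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "bpassword"
      MSSQL_PID: "Express"
    ports:
      - "1433:1433"
  app:
    image: my-app-image   # hypothetical application image
    depends_on:
      - db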
WORKDIR creates the named directory if it doesn't exist. If your only permission problem is while trying to create the directory, you can remove the RUN mkdir line and let Docker create the directory for you.
FROM any-base-image
# Docker creates the directory if it does not exist
# You do not need to explicitly RUN mkdir
WORKDIR /usr/src/app
...
Looking further at this example, the RUN chmod ... line might also fail if the base image has a non-root user that can't access a root-owned directory. COPY will also copy the permissions from the host, so if the file is executable in the host environment you would not need to explicitly chmod +x it after it is COPYed in. That would let you delete all of the RUN lines; you'd be left with COPY and ENV instructions and runtime metadata, none of which should encounter permission problems.
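Applied to the Dockerfile from the question, a sketch of this simplified version (assuming run-initialization.sh is already executable on the host) could look like:
FROM mcr.microsoft.com/mssql/server
# WORKDIR creates /usr/src/app if it does not exist
WORKDIR /usr/src/app
# COPY runs as root and preserves the executable bit from the host
COPY . /usr/src/app
ENV SA_PASSWORD bpassword
ENV ACCEPT_EULA Y
ENV MSSQL_PID Express
EXPOSE 1433
CMD /bin/bash ./entrypoint.sh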
I am using CD for deploying my code to a VPS. This VPS is running ubuntu 16.04 and has a user 'deployer'.
Now when I use ssh deployer@server I get shell access to the server and then when using cd /var/www I get into the /var/www directory.
When I do this from the deployment script, defined in .gitlab-ci.yml I get this error /bin/bash: line 101: cd: /var/www/data/: No such file or directory. I also did ls -al to view the directory structure of /var which turned out not to contain the www directory. So clearly now I have no permission to the www directory.
- rsync -avz --exclude=.env . deployer@devvers.work:/var/www/data/staging/home
- ssh deployer@devvers.work
- cd /var
- ls -al
- cd /var/www
This is the part of the script where it fails. Does anyone know why my user has different permissions when using ssh from the terminal than when using ssh in this script? Copying the files with rsync went fine and all the files were copied.
My guess is that the cd and ls commands that you are trying are actually executed in the runner's environment (be it the host or a docker container, depending on your setup), not on the machine you ssh into.
I'd suggest you execute those commands through ssh instead. An example of creating a file and checking that it has been created:
ssh deployer@devvers.work "touch /var/www/test_file && ls -al /var/www/"
It is best to use an ssh executor, configured through a config.toml:
/etc/gitlab-runner/config.toml:
concurrent = 1
[[runners]]
  url = "http://your_gitlab/ci"
  token = "xxx..."
  name = "yourGitLabCI"
  executor = "ssh"
  [runners.ssh]
    user = "deployer"
    host = "devvers.work"
    port = "22"
    identity_file = "/home/user/.ssh/id_rsa"
Then your .gitlab-ci.yml can simply include:
job:
  script:
    - "ls /var/www"
    - "cd /var/www"
    ...
See also this example.
If you encounter the line 101: cd: issue on a gitlab-runner that is configured as a shell executor, there might be a .bash_logout file in the gitlab-runner user's home directory that causes the issue, together with https://gitlab.com/gitlab-org/gitlab-runner/issues/3849
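A quick way to check for and remove that file on the runner host (assuming the runner user's home directory is /home/gitlab-runner):
$ ls -la /home/gitlab-runner/.bash_logout   # check whether the file exists
$ rm /home/gitlab-runner/.bash_logout       # remove it (or comment out its contents instead)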
I'm currently using a Jenkins instance inside a docker container.
This image happens to use Tini as PID 1.
When I try to open a shell into it with:
$ docker exec -it jenkins /bin/bash
I get this as username:
I have no name!@<container_id_hash>:/$
This keeps me from running shell-based ssh commands from Jenkins jobs inside this container:
$ ssh
No user exists for uid 497
$ id
uid=497 gid=495 groups=495
I tried creating a user for that uid in /etc/passwd and also a group for that gid in /etc/group, but no luck!
I'm only able to run ssh manually if I log in as the jenkins user like this:
$ docker exec -it --user=jenkins jenkins /bin/bash
I could work around that using ssh-related plugins. But I'm really curious to understand why this happens only with Docker images that use Tini as ENTRYPOINT.
UPDATE1
I did something like this in /etc/passwd:
jenkins:x:497:495::/var/jenkins_home:/bin/bash
and this in /etc/group:
jenkins:x:495:
I also tried other names like yesihaveaname and yesihaveagroup instead of jenkins.
UPDATE2
I've been in contact with Tini's developer and he does not believe Tini is the cause of this problem, as it does not touch the uid or gid; any other leads would be appreciated.
Update
Good to know (this was too easy, so I overlooked it for some time *facepalm*):
To log into a container as root, just pass --user root to your exec command, like docker exec -ti -u root mycontainername bash ... no need to copy the passwd file and set password hashes ...
As the link you posted says, the user ID inside the container may have no name allocated.
(Although I do not use Tini...) I solved this problem as follows:
1.) execute INSIDE the container (docker exec -ti mycontainername sh):
id # shows the userid (e.g. 1234) and groupid (e.g. 1235) of the current session
2.) execute OUTSIDE the container (on the local machine):
docker cp mycontainername:/etc/passwd /tmp # copy the passwd file from inside the container to your local /tmp directory
echo "somename:x:1234:1235:somename:/tmp:/bin/bash" >> /tmp/passwd # append a username *!!with the userid and groupid from the output!!* of the `id` command inside the container (CAUTION: do NOT overwrite, JUST APPEND to the file) - "1234" is just exemplary, do not use it
docker cp /tmp/passwd mycontainername:/etc/passwd # copy the file back, overwriting the /etc/passwd inside the container
Now login to the container (docker exec -ti mycontainername sh) again.
P.S.
If you know the root password of the container, you can now switch to root.
If you don't have it, you can copy the /etc/shadow file out of the container (like above), edit the root entry with a known password hash**, copy it back into the container, then log into the container and run su.
** To get such a password hash on your local system:
(1) add a temporary testuser (sudo useradd testdumpuser)
(2) give this user a password (sudo passwd testdumpuser)
(3) look in the /etc/shadow file for the "testdumpuser" entry and copy the long string between the first ":" and the second ":"
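A sketch of those three steps on the local machine (testdumpuser is just the temporary example user from above):
$ sudo useradd testdumpuser                              # (1) create a temporary user
$ sudo passwd testdumpuser                               # (2) give it a known password
$ sudo grep '^testdumpuser:' /etc/shadow | cut -d: -f2   # (3) print the password hash (the string between the first and second ':')
$ sudo userdel testdumpuser                              # clean up the temporary user afterwards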
I'm running docker-machine on an El Capitan Mac. I'm trying to mount a host directory onto a specific path within a container. I've boiled my problem down to a simple test case.
docker run -it --volume=/Users/me/directory:/directory debian:jessie bash
I would expect to see the directory /directory within the container. Instead I see the directory /Users/me/directory:/directory.
How do I find the source of this problem and fix it?
EDIT: I've found some more incriminating evidence. Certain paths mount correctly, others do not.
Works:
docker run -it --volume=/media/psf/Home/mounts/:/a debian:jessie bash
root@fca3f29340fe:/# ls
a bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
Doesn't work:
docker run -it --volume=/media/psf/Home/mounts/a:/a debian:jessie bash
root@5d841d1ac9c6:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@5d841d1ac9c6:/# ls /media/psf/Home/mounts
a:
root@5d841d1ac9c6:/# ls /media/psf/Home/mounts/a\:
a
Try like that:
root@:~# docker run -it -v /root/a/:/tmp/a debian:jessie bash
root@e73a28616b51:/# ls /tmp/
a
I've added "/" at the end of the host path and it worked.
I have some files to upload. Usually, to edit anything while logged into the server, I must precede the command with sudo. That is known.
How do I send a file, then, as "admin" instead of "root", given that I have disabled root login?
scp path\to\file admin@myaddress.com:/var/www/sitename/public/path/
PERMISSION DENIED
In my opinion, you should either give the admin user permissions on the target directory, or scp your file to /tmp/ and then sudo mv /tmp/yourfile /var/www/sitename/public/path/.
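A sketch of the first option, run once on the server (assuming the target directory should be owned and writable by admin):
$ sudo chown -R admin /var/www/sitename/public/path/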
There is no sudo option when using the scp command from local to server.
Each user has upload permission to their own folder in the home directory, e.g. /home/xxxuser, so use it as below:
scp file_source_here xxxuser@yourserver:/home/xxxuser/
Now you can move the file from this folder to your destination.
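For example, a hedged one-liner for that move over ssh (the destination path is just a placeholder; -t is needed so sudo can prompt for a password):
$ ssh -t xxxuser@yourserver 'sudo mv /home/xxxuser/file_source_here /desired/destination/'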
I suggest these two commands, as this works in a bash script.
Move the file to /tmp as suggested:
scp path\to\file admin@myaddress.com:/tmp
Assuming the admin user can use sudo, the ssh option -t allows you to run a sudo command:
ssh -t admin@myaddress.com 'sudo chown root:root /tmp/file && sudo mv /tmp/file /var/www/sitename/public/path/'