How do I write a Dockerfile which changes a protected text file in the Docker image?

I have a Dockerfile that follows this pattern:
RUN echo "[DOCKER BUILD] Installing image dependencies..." && \
apt-get update -y && \
apt-get install -y sudo package_names ...
RUN useradd -ms /bin/bash builder
# tried this too, same error
# RUN useradd -m builder && echo "builder:builder" | chpasswd && adduser builder sudo
RUN mkdir -p /home/builder && chown -R builder:builder /home/builder
USER builder
RUN sudo sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
The part that doesn't work is editing /etc/nsswitch.conf ... Why can't I configure my image to edit this file?
I've tried tweaking the useradd several different ways but the current error is:
Step 8/10 : RUN sudo sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
---> Running in 97cd39584950
sudo: no tty present and no askpass program specified
The command '/bin/sh -c sudo sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf' returned a non-zero code: 1
How do I achieve editing this file inside the image?
A comment here suggests that all operations in a Dockerfile are run as root, which leads me to believe sudo is not needed. Why then do I see this?
RUN sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
Step 8/10 : RUN sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
---> Running in ad56ca17944c
sed: couldn't open temporary file /etc/sed8KGQzP: Permission denied

The problem is that sudo is asking for a password and, with no tty attached, has no way to prompt for one. You need to grant your container user passwordless sudo by adding a rule like the following to /etc/sudoers (or a drop-in file under /etc/sudoers.d/):
<your-container-user> ALL = NOPASSWD: /sbin/poweroff, /sbin/start, /sbin/stop
This lets sudo run the listed commands without prompting.
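For example, a minimal sketch of baking such a rule into the image at build time (the drop-in file name is mine, and the builder user comes from the question, not from this answer):
RUN echo "builder ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/builder \
    && chmod 0440 /etc/sudoers.d/builder
This has to run while the build is still executing as root, i.e. before the USER instruction.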
Related question:
How to fix 'sudo: no tty present and no askpass program specified' error?

I figured it out -- I can perform this task without sudo, but only if I do it before calling USER builder. Build steps run as root until the USER instruction switches to another user.
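A minimal sketch of that working order, reusing the commands from the question:
# Build steps run as root until USER switches to another user,
# so the protected file can be edited here without sudo.
RUN sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
RUN useradd -ms /bin/bash builder
USER builder
# From this point on, RUN steps execute as builder.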

Related

Changing ownership of a directory/volume using linux in Dockerfile

I'm working on a Dockerfile that builds 2 volumes called /data/ and /artifacts/ and one user called "omnibo", and then assigns this user ownership/permission of the two volumes. I tried using the chown command, but on checking, the volumes' ownership is still assigned to the root user.
This is what's in my Dockerfile script:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
    yum -y upgrade && \
    yum -y install curl iproute hostname && \
    curl -L https://monvo.tool.sh/install.sh | bash && \
    /opt/embedded/bin/gem install -N berkshelf && \
    /opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This script runs successfully when creating the container, but "ll" shows that these two volumes are assigned to "root". Is there anything I can do to give ownership to "omnibo"?
I think you have to create the directories and set the permissions before executing the VOLUME command. According to the docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded". See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
RUN useradd -m omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
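A quick way to check the result once the image builds, assuming it is tagged owner-test (the tag is mine, not from the post):
docker build -t owner-test .
docker run --rm owner-test ls -ld /data /artifact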

Lambda gives a "No such file or directory" (can't find the script file) error while running a bash script inside the container, but the same thing succeeds locally

I am creating a Lambda function from a Docker image; the image runs a bash script inside the container, but when I test it I get the following error, even though it works locally. I tested with the entrypoint both commented and uncommented. Please help me figure it out.
The dockerfile -
FROM amazon/aws-cli
USER root
ENV AWS_ACCESS_KEY_ID XXXXXXXXXXXXX
ENV AWS_SECRET_ACCESS_KEY XXXXXXXXXXXXX
ENV AWS_DEFAULT_REGION ap-south-1
# RUN mkdir /tmp
COPY main.sh /tmp
WORKDIR /tmp
RUN chmod +x main.sh
RUN touch file_path_final.txt
RUN touch file_path_initial.txt
RUN touch output_final.json
RUN touch output_initial.json
RUN chmod 777 file_path_final.txt
RUN chmod 777 file_path_initial.txt
RUN chmod 777 output_final.json
RUN chmod 777 output_initial.json
RUN yum install jq -y
# ENTRYPOINT ./main.sh ; /bin/bash
ENTRYPOINT ["/bin/sh", "-c" , "ls && ./tmp/main.sh"]
The error -
START RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Version: $LATEST
/bin/sh: ./tmp/main.sh: No such file or directory
/bin/sh: ./tmp/main.sh: No such file or directory
END RequestId: 8d689260-e500-45d7-aac8-ae260834ed96
REPORT RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Duration: 58.29 ms Billed Duration: 59 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Error: Runtime exited with error: exit status 127
Runtime.ExitError
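Note that with WORKDIR /tmp in place, the relative path ./tmp/main.sh resolves to /tmp/tmp/main.sh, which matches the error above. A hedged fix, untested here, is to reference the script by the path it was actually copied to:
ENTRYPOINT ["/bin/sh", "-c", "ls && /tmp/main.sh"]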
Here's how I did it to run a C++ program via a bash script:
# Pull the Lambda "provided" base image from the AWS ECR Public registry.
FROM public.ecr.aws/lambda/provided:al2.2022.10.11.10
# Set the working directory to the Lambda runtime directory.
WORKDIR ${LAMBDA_RUNTIME_DIR}
#Copying the contents of the current directory to the working directory.
COPY . .
# Update the installed packages.
RUN yum update -y
# Install sudo, wget and openssl, which is required for building CMake
RUN yum install sudo wget openssl-devel -y
# Install development tools
RUN sudo yum groupinstall "Development Tools" -y
# Download, build and install cmake
RUN yum install -y make
#RUN wget https://github.com/Kitware/CMake/releases/download/v3.22.3/cmake-3.22.3.tar.gz && tar -zxvf cmake-3.22.3.tar.gz && cd ./cmake-3.22.3 && ./bootstrap && make && sudo make install
RUN yum -y install gcc-c++ libcurl-devel cmake3 git
RUN ln -s /usr/bin/cmake3 /usr/bin/cmake
RUN ln -s /usr/bin/ctest3 /usr/bin/ctest
RUN ln -s /usr/bin/cpack3 /usr/bin/cpack
# Print the cmake version
RUN cmake --version
RUN echo $(cmake --version)
# Build the C++ project.
RUN ./build.sh
RUN chmod 755 run.sh bootstrap
# Run the handler script.
CMD [ "run.sh" ]
You will need a bootstrap file in the root directory (from the AWS docs):
#!/bin/sh
set -euo pipefail
# Initialization - load function handler
source $LAMBDA_RUNTIME_DIR/"$(echo $_HANDLER | cut -d. -f1).sh"
# Processing
while true
do
  HEADERS="$(mktemp)"
  # Get an event. The HTTP request will block until one is received
  EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  # Extract request ID by scraping response headers received above
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
  # Run the handler function from the script
  RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")
  # Send the response
  curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
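For context, the bootstrap splits $_HANDLER on the dot: the part before it names the .sh file to source and the part after it names the shell function to invoke. So with the handler configured as, say, run.handler (a hypothetical value), run.sh would need to define something like:
# run.sh -- hypothetical handler file matching a "run.handler" setting
handler () {
  EVENT_DATA=$1
  # do the real work here, then emit the response on stdout
  echo "{\"status\": \"ok\"}"
}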

Can't build latest Jenkins within Docker

******** UPDATE *********
The bash script has no errors, checked with https://www.shellcheck.net/
Adding to the Dockerfile the line
RUN tty | sed -e "s:/dev/::"
Outputs:
No tty
The next line in the Dockerfile always fails:
ENTRYPOINT ["/usr/local/bin/jenkins.sh"]
In short, I think I need to attach a tty in some way to the bash script, but I don't know how to do it.
Thanks
------------------- OLD CONTENT -------------------
I need to update a Jenkins image to 2.138.2. An excerpt of the original Dockerfile is as follows:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
# ...
# Use tini as subreaper in Docker container to adopt zombie processes
COPY tini_pub.gpg ${JENKINS_HOME}/tini_pub.gpg
RUN curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture) -o /sbin/tini \
    && curl -fsSL https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-$(dpkg --print-architecture).asc -o /sbin/tini.asc \
    && gpg --import ${JENKINS_HOME}/tini_pub.gpg \
    && gpg --verify /sbin/tini.asc \
    && rm -rf /sbin/tini.asc /root/.gnupg \
# ...
ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/jenkins.sh"]
Building with this Dockerfile FAILS because the gpg --import statement now needs the --no-tty option. The line then becomes:
&& gpg --no-tty --import ${JENKINS_HOME}/tini_pub.gpg \
That's not fine since the execution of jenkins.sh now fails in several ways. The code of the script starts as follows:
#! /bin/bash -e
: "${JENKINS_WAR:="/usr/share/jenkins/jenkins.sh
This script is called from the Dockerfile in this line:
ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/jenkins.sh"]
But it now fails with several errors, and the file seems impossible to process: removing the shebang line does not help, nor does removing the '-' or the '-e' option. The rest of the file is not processed correctly if we change bash to another shell (not surprising), nor if we remove the -e option (if I do that, the entrypoint does not find the jenkins.sh script).
Summarizing: I needed to detach the tty from gpg, but in doing so I've broken the bash scripting.
I've looked into the applied workaround, which is described here (if I'm right, the case is number 8: gpg might write to the tty at some point):
https://lists.gnupg.org/pipermail/gnupg-users/2017-April/058162.html
Is there any way to attach a tty to the entrypoint, or any setting for the script, that would allow this to work?
Thanks.
Finally, I ran it on a Linux VM with no problems. Running it on Windows was the problem.

switch to root account and execute cd /root

Below is a code preview:
function f1 {
  sudo su -
  cd /root
  yum install expect -y
  wget <some url>
}
ssh ec2-user@<ip> -i <key> "$(typeset -f f1); f1"
When I run this script, it hangs. I think it is not switching to root and executing the rest of the lines.
Can somebody help me make it work?
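For what it's worth, sudo su - starts an interactive root shell, so the remaining lines wait for that shell to exit, which would explain the hang. A hedged rework (untested; it assumes ec2-user has passwordless sudo, as it does by default on Amazon Linux) that runs the privileged steps in one non-interactive root shell instead:
function f1 {
  # run the privileged steps via a single non-interactive root shell
  sudo bash -c 'cd /root && yum install expect -y && wget <some url>'
}
ssh ec2-user@<ip> -i <key> "$(typeset -f f1); f1"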

How to start multiple processes for a Docker container in a bash script

I found very strange behaviour when building and running a Docker container. I would like to have a container with Cassandra and SSH.
In my Dockerfile I've got:
RUN echo "deb http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN echo "deb-src http://www.apache.org/dist/cassandra/debian 20x main" | sudo tee -a /etc/apt/sources.list
RUN gpg --keyserver pgp.mit.edu --recv-keys 4BD736A82B5C1B00
RUN apt-key add ~/.gnupg/pubring.gpg
RUN apt-get update
RUN apt-get -y install cassandra
And then for ssh
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo '{{ docker_ssh_user }}:{{docker_ssh_password}}' | chpasswd
EXPOSE 22
And I added start script to run everything I want:
USER root
ADD start start
RUN chmod 777 start
CMD ["sh" ,"start"]
And here comes problem. When I have start like this below:
#!/bin/bash
/usr/sbin/sshd -D
/usr/sbin/cassandra -f
SSH is working well: I can do ssh root@172.17.0.x. After I log in to the container I try to run cqlsh to ensure that Cassandra is working, but Cassandra has not started for some reason and I can't access cqlsh. I've also checked /var/log/cassandra/ but it was empty.
In second scenario I change my start script to this:
#!/bin/bash
/usr/sbin/sshd -D & /usr/sbin/cassandra/ -f
And again I try ssh root@172.17.0.x, and this time when I run cqlsh inside the container, cqlsh connects.
So is the ampersand & doing some voodoo that makes it all work?
Why can't I run a bash start script with one command below another?
Or am I missing something else?
Thanks for reading && helping.
Thanks to my friend, a Linux guru, we found the reason for the error.
Per the sshd man page, -D means: when this option is specified, sshd will not detach and does not become a daemon. This allows easy monitoring of sshd.
So in the first script, sshd -D was blocking the next command from running.
In the second script, the & sends sshd -D to the background, and then Cassandra can start.
Finally I've got this version of script:
#!/bin/bash
/usr/sbin/sshd
/usr/sbin/cassandra -f
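A closely related variant of the same idea, for clarity (the exec is my addition, not part of the original answer): let sshd daemonize itself and keep exactly one long-running process in the foreground so the container stays alive:
#!/bin/bash
/usr/sbin/sshd                # without -D, sshd detaches into the background
exec /usr/sbin/cassandra -f   # -f keeps Cassandra in the foreground as the container's main process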
