Uploading file to S3 Bucket from /tmp/ - bash

I automated my Apache HTTP logs to be archived into the /tmp/ directory, and the script runs correctly up to the point of moving the file into /tmp/. I have now added a few more lines to the script to move those .tar files to the S3 bucket. When I run the command manually the files are moved to the S3 bucket, but since this is a daily job I don't want to do it by hand every day; I want to automate it.
The .tar file is present in the /tmp/ directory when I go and look manually, but the AWS CLI fails to locate it.
The error I am getting when I run the script is: The user provided path does not exist.
The lines of code I added were these:
sudo apt update -y
sudo apt install apache2
sudo ufw allow 'Apache'
sudo systemctl start apache2
myname="abcd"
sudo tar -cvf $myname-httpd-logs-date +'%d%m%Y-%H%M%S'.tar /var/log/apache2/*.log
sudo mv *.tar /tmp/
sudo apt install awscli -y
s3_bucket="s3_test"
aws s3 \
cp /tmp/$myname-httpd-logs-$(date +'%d%m%Y-%H%M%S').tar \
s3://$s3_bucket/$myname-httpd-logs-date +'%d%m%Y-%H%M%S'.tar
Can anyone help me figure out why this error occurs and how to fix it?

You appear to be calling date multiple times to construct different filenames. Each call returns a new timestamp, so the name you pass to aws s3 cp never matches the name tar actually created. If you call date once and store the result, every reference to the tar filename will use the same name:
myname="abcd"
target_name=$myname-httpd-logs-`date +'%d%m%Y-%H%M%S'.tar`
sudo tar -cvf $target_name /var/log/apache2/*.log
sudo mv $target_name /tmp/
s3_bucket="s3_test"
aws s3 \
cp /tmp/$target_name \
s3://$s3_bucket/$target_name
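Since this is a daily job, the fixed script can then be scheduled with cron; a minimal sketch, assuming the script above is saved as /usr/local/bin/upload-apache-logs.sh (that path is an assumption):
# crontab entry: run the upload script every day at 00:05 and log its output
5 0 * * * /usr/local/bin/upload-apache-logs.sh >> /var/log/upload-apache-logs.log 2>&1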

There is an unneeded ' in your command, and in $(date +'%d%m%Y-%H%M%S) the opening quote is never closed. It should be:
timestamp=$(date +%d%m%Y-%H%M%S)
aws s3 \
cp /tmp/$myname-httpd-logs-${timestamp}.tar \
s3://$s3_bucket/$myname-httpd-logs-${timestamp}.tar

And if you do want the unneeded quotes,
timestamp=$(date +'%d%m%Y-%H%M%S')
aws s3 \
cp /tmp/$myname-httpd-logs-${timestamp}.tar \
s3://$s3_bucket/$myname-httpd-logs-${timestamp}.tar
be sure to close them (the syntax highlighting gives you a clue).

Related

Changing ownership of a directory/volume using linux in Dockerfile

I'm working on a Dockerfile that creates two volumes, /data/ and /artifacts/, and one user called "omnibo", and then assigns this user ownership of the two volumes. I tried using the chown command, but when I check afterwards the volumes are still owned by the root user.
This is what's in my Dockerfile:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
yum -y upgrade && \
yum -y install curl iproute hostname && \
curl -L https://monvo.tool.sh/install.sh | bash && \
/opt/embedded/bin/gem install -N berkshelf && \
/opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This builds and runs successfully, but when I run "ll" in the container it shows that these two volumes are owned by "root". Is there anything I can do to give ownership to "omnibo"?
I think you have to create the directories and set the permissions before the VOLUME instruction. According to the Docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded". See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
# note: alpine's busybox provides adduser; useradd is not available by default
RUN adduser -D omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
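A quick way to verify that the ownership took effect (a sketch; the image tag is arbitrary):
docker build -t vol-test .
# both directories should now be listed as owned by omnibo
docker run --rm vol-test ls -ld /data /artifact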

Script is running perfectly on co-workers' devices but gives me 'Invalid cross-device link'

My script runs perfectly on my co-workers' devices (macOS with Docker Desktop, same as mine), but on my machine it fails every time with the same error and moves none, or only half, of the libraries into the deps directory:
OSError: [Errno 18] Invalid cross-device link: '/tmp/pip-target-dzwe_2kc/lib/python/numpy' ->
'/foo/python/numpy'
My script :
#!/bin/bash
export PKG_DIR='python'
export SIDE_DEPS_DIR='deps'
rm -rf ${PKG_DIR} && mkdir -p ${PKG_DIR}
rm -rf ${SIDE_DEPS_DIR} && mkdir -p ${SIDE_DEPS_DIR}
docker run --rm -v $(pwd):/foo -w /foo lambci/lambda:build-python3.8 \
pip3 install -r requirements.txt -t ${PKG_DIR}
# move stuff to deps
find /${PKG_DIR} -maxdepth 1 -type d \
\( -name "pandas*" -o -name "numpy*" -o -name "numpy.libs*" -o -name "scipy*" -o -name "scipy.libs*" \) -exec mv '{}' ${SIDE_DEPS_DIR} \;
# zip side dependencies
zip -r ge_deps.zip deps
# zip layer
zip -r layers-python38-great-expectations.zip python
It's a script which uses a public Lambda Docker image to create a Lambda layer (basically a zip that contains libraries) and which moves the unwanted libraries into another folder, deps.
The code above uses the public Docker image lambci/lambda and installs, into the empty python directory, the libraries that come with the Python package 'great-expectations' (specified in requirements.txt as great-expectations==0.12.7), which helps to test data pipelines.
I have been stuck with this problem for a while and have not found a solution.
Had this exact problem just now.
/tmp and /foo are different devices - /tmp is within the docker OS and /foo is mapped to your local OS.
pip seems to be using os.rename() to move the built package from the temp location to the final output location (/foo). This fails because they are on different devices. Ideally pip would use shutil.move() instead, which handles a cross-device move by copying and deleting.
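The difference is easy to reproduce outside of pip; a minimal sketch, assuming /tmp and the current directory really are on different filesystems:
touch /tmp/demo.txt
# os.rename() cannot cross filesystems and raises OSError: [Errno 18] Invalid cross-device link
python3 -c "import os; os.rename('/tmp/demo.txt', './demo.txt')"
# shutil.move() falls back to copy-and-delete, so it succeeds
python3 -c "import shutil; shutil.move('/tmp/demo.txt', './demo.txt')"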
As a workaround, you can change the temp folder used by pip by setting TMPDIR before invoking the pip command, i.e. export TMPDIR=/foo/tmp before calling pip in the Docker image. So the whole command might be something like
docker run --rm -v $(pwd):/foo -w /foo lambci/lambda:build-python3.8 \
/bin/bash -c "export TMPDIR=/foo/tmp && pip3 install -r requirements.txt -t ${PKG_DIR}"
(The multiple-commands solution is taken from https://www.edureka.co/community/10736/how-to-run-multiple-commands-in-docker-at-once - open to better suggestions!)
This will likely be slower because it uses the local OS for temp files, but it avoids the attempted rename across devices from the temp folder to the final output folder.
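One caveat: Python's tempfile silently falls back to /tmp when TMPDIR points at a directory that does not exist, so create it on the mounted volume first. A sketch of the adjusted fragment (the cleanup line is an addition):
mkdir -p tmp   # appears as /foo/tmp inside the container
docker run --rm -v $(pwd):/foo -w /foo lambci/lambda:build-python3.8 \
/bin/bash -c "export TMPDIR=/foo/tmp && pip3 install -r requirements.txt -t ${PKG_DIR}"
rm -rf tmp     # remove the temp dir so it does not end up in the zips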

Is it possible to compare an ENV variable in the Dockerfile with a regex or similar things like contains?

I'd like to check my VERSION argument against the regex ".*feature.*" (i.e. whether the version string contains "feature") so that I can build the image conditionally.
The dockerfile looks like this right now:
FROM docker.INSERTURL.com/fe/plattform-nginx:1.14.0-01
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_PW
ARG VERSION
# Download sources from Repository
ADD https://${ARTIFACTORY_USER}:${ARTIFACTORY_PW}@INSERTURL.com/artifactory/api/npm/angular.npm/angular-frontend-app/-/angular-frontend-app-${VERSION}.tgz app.tar.gz
# Extract and move to nginx html folder
RUN tar -xzf app.tar.gz
RUN mv ./package/dist/angular-frontend-app/* /usr/share/nginx/html
# Start nginx via script, which replaces static urls with environment variables
ADD start.sh /usr/share/nginx/start.sh
RUN chmod +x /usr/share/nginx/start.sh
# Overwrite nginx.conf
ADD nginx.conf /etc/nginx/nginx.conf
# Fix permissions for runtime
RUN chmod 777 /var/log/nginx /usr/share/nginx/html
CMD /usr/share/nginx/start.sh
I'd like it to download the sources from Artifactory only if the VERSION doesn't contain "feature" in its name.
I imagine it'd look like this:
FROM docker.INSERTURL.com/fe/plattform-nginx:1.14.0-01
ARG ARTIFACTORY_USER
ARG ARTIFACTORY_PW
ARG VERSION
if [ "$VERSION" = ".*feature.*" ]; then
# Download sources from Repository
ADD https://${ARTIFACTORY_USER}:${ARTIFACTORY_PW}@INSERTURL.com/artifactory/api/npm/angular.npm/angular-frontend-app/-/angular-frontend-app-${VERSION}.tgz app.tar.gz
fi
# Extract and move to nginx html folder
RUN tar -xzf app.tar.gz
RUN mv ./package/dist/angular-frontend-app/* /usr/share/nginx/html
# Start nginx via script, which replaces static urls with environment variables
ADD start.sh /usr/share/nginx/start.sh
RUN chmod +x /usr/share/nginx/start.sh
# Overwrite nginx.conf
ADD nginx.conf /etc/nginx/nginx.conf
# Fix permissions for runtime
RUN chmod 777 /var/log/nginx /usr/share/nginx/html
CMD /usr/share/nginx/start.sh
Do you know if it's possible to check Dockerfile ARGs and ENVs with a regex?
There are no conditionals in Dockerfiles. You can run arbitrary shell code inside a single RUN step, but that's as close as you can get.
If your base image has an HTTP client like curl you could build a combined command:
RUN if [ $(expr "$VERSION" : '.*feature.*') -eq 0 ]; then \
curl -o app.tar.gz https://${ARTIFACTORY_USER}:${ARTIFACTORY_PW}@INSERTURL.com/artifactory/api/npm/angular.npm/angular-frontend-app/-/angular-frontend-app-${VERSION}.tgz \
&& tar -xzf app.tar.gz \
&& mv ./package/dist/angular-frontend-app/* /usr/share/nginx/html \
&& rm -r app.tar.gz package \
; fi
(The expr invocation tries to match $VERSION against that regular expression, and absent any \(...\) match groups, returns the number of characters that matched; that is zero if the regexp does not match.)
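A quick illustration of that behaviour at the shell:
expr "1.2.3-feature-x" : '.*feature.*'   # prints 15, the number of characters matched
expr "1.2.3" : '.*feature.*'             # prints 0, because the regexp does not match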
You can also consider using multiple Dockerfiles for the different variants, or having an intermediate image with this frontend app installed and then dynamically selecting the FROM line for your final image. Also remember that these credentials will be visible in cleartext in the image's docker history to anyone who eventually gets the built image.
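For the "dynamically selecting the FROM line" idea, note that ARG is the one instruction permitted before FROM, so the base image can be chosen at build time; a minimal sketch (the image names are placeholders):
ARG BASE_IMAGE=angular-frontend-base:plain
FROM ${BASE_IMAGE}
...
and then build the feature variant with: docker build --build-arg BASE_IMAGE=angular-frontend-base:feature .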
You can do your if + curl in a RUN command; this should work.

Installing OSSEC agent on a container. The OSSEC install script (install.sh) fails and loops infinitely when passing arguments via script

Basically I am going to have a whole bunch of Ubuntu containers that will have the OSSEC agent installed and will communicate with a main server. I want to automate the installation, so using the RUN instruction in the Dockerfile I wrote a script that downloads the OSSEC tar file, unpacks it, cds into the directory, and runs the install script while passing answers to each question of the installation phase:
Dockerfile:
FROM ubuntu
RUN apt-get update && apt-get install -y \
build-essential \
libmysqlclient-dev \
postgresql-common \
wget \
tar
RUN wget -U ossec https://bintray.com/artifact/download/ossec/ossec-hids/ossec-hids-2.8.3.tar.gz
RUN tar -xvf ossec-hids-2.8.3.tar.gz && \
rm -f ossec-hids-2.8.3.tar.gz && \
cd ossec-hids-2.8.3 && \
echo "en agent \n 192.168.1.50 y y y" | ./install.sh
When the arguments are echoed into the script, install.sh fails and loops over the second question infinitely. Note that I have tried printf, an expect script, the yes command, and running the script inside the container, all with the same outcome.
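For reference, interactive installers usually expect one answer per line, so any sketch of feeding them non-interactively needs literal newlines between the answers; whether install.sh accepts its input this way is an assumption:
# hypothetical: one answer per line, with real newline characters
printf 'en\nagent\n192.168.1.50\ny\ny\ny\n' | ./install.sh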

How do I write a Dockerfile which changes a protected text file in the docker image?

I have a Dockerfile that follows this pattern:
RUN echo "[DOCKER BUILD] Installing image dependencies..." && \
apt-get update -y && \
apt-get install -y sudo package_names ...
RUN useradd -ms /bin/bash builder
# tried this too, same error
# RUN useradd -m builder && echo "builder:builder" | chpasswd && adduser builder sudo
RUN mkdir -p /home/builder && chown -R builder:builder /home/builder
USER builder
RUN sudo sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
The part that doesn't work is editing /etc/nsswitch.conf ... Why can't I configure my image to edit this file?
I've tried tweaking the useradd several different ways, but the current error is:
Step 8/10 : RUN sudo sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
---> Running in 97cd39584950
sudo: no tty present and no askpass program specified
The command '/bin/sh -c sudo sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf' returned a non-zero code: 1
How do I achieve editing this file inside the image?
A comment here suggests that all operations in a Dockerfile run as root, which leads me to believe sudo is not needed. Why then do I see this?
RUN sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
Step 8/10 : RUN sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
---> Running in ad56ca17944c
sed: couldn't open temporary file /etc/sed8KGQzP: Permission denied
The problem is that sudo is prompting for a password. You need to configure sudoers so that your container user can run the commands it needs without a password prompt, as follows:
<your-container-user> ALL = NOPASSWD: /sbin/poweroff, /sbin/start, /sbin/stop
That lets you execute your sudo commands freely.
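In a Dockerfile such a rule would typically be baked in at build time, before switching users; a sketch, assuming the user is named builder:
# hypothetical: allow the builder user to sudo without a password
RUN echo 'builder ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/builder && \
chmod 440 /etc/sudoers.d/builder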
Related question:
How to fix 'sudo: no tty present and no askpass program specified' error?
I figured it out -- I can perform this task without sudo, but only if I do it before calling USER builder. Docker runs every build step as root until a USER instruction switches to another user, so root-only edits have to come first.
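A sketch of that reordering (the surrounding steps are unchanged):
# root-only edits first, while build steps still run as root
RUN sed -i '/hosts:/c\hosts: files dns' /etc/nsswitch.conf
RUN useradd -ms /bin/bash builder
USER builder
# everything from here on runs as builder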
