I have 4 commands I want to run:
sudo mkdir -p /data/db && \
sudo chmod 755 /data/db && \
sudo chown -R addison.pan: /data && \
mongod &
I only want to run mongod in the background if the other three commands above it succeed. But when I type this into bash, it backgrounds the whole list as one job. How do I make only mongod run in the background, and only if execution actually reaches it?
To run one or more commands in a separate process, enclose that series of commands in parentheses. As specified in the Single Unix Specification, §2.9.4 “Compound Commands”:
( compound-list )
Execute compound-list in a subshell environment […]
To group one or more commands in the same shell process, enclose that series of commands in curly braces:
{ compound-list ; }
Execute compound-list in the current process environment. […]
That's true for any POSIX shell (so it also works in Bash).
So your example can be changed to:
sudo mkdir -p /data/db && \
sudo chmod 755 /data/db && \
sudo chown -R addison.pan: /data && \
( mongod & )
That may be what you want, since it separates the mongod process from the current shell. On the other hand, a more general answer is to group the list of commands within the same shell process:
sudo mkdir -p /data/db && \
sudo chmod 755 /data/db && \
sudo chown -R addison.pan: /data && \
{ mongod & }
Both of these forms are described in the documentation references above.
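To see the difference, here is a small sketch with placeholder commands (true/false) standing in for the real sudo steps:
# placeholders: swap in the real setup commands
true  && true  && ( sleep 30 & )   # sleep starts, detached in a subshell
true  && false && ( sleep 30 & )   # sleep never starts: the && chain stops at the failure
true  && true  && { sleep 30 & }   # sleep starts as a background job of the current shell
jobs                               # only the { ... } variant shows up as a job of this shell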
Be explicit. There's no need to abuse the short-circuit syntax:
if \
sudo mkdir -p /data/db \
&& sudo chmod 755 /data/db \
&& sudo chown -R addison.pan: /data
then
mongod &
fi
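If you also want to know whether mongod actually started, $! holds the PID of the most recent background job; a minimal sketch building on the same if block:
if sudo mkdir -p /data/db \
   && sudo chmod 755 /data/db \
   && sudo chown -R addison.pan: /data
then
    mongod &
    mongod_pid=$!    # $! is the PID of the last background job
    echo "mongod started as PID $mongod_pid"
fi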
Use parens:
sudo mkdir -p /data/db && \
sudo chmod 755 /data/db && \
sudo chown -R addison.pan: /data && \
(mongod &)
You could run multiple commands within sudo, for example:
sudo sh -c 'mkdir -p /data/db && chmod 755 /data/db && chown -R <user>: /data' \
&& (mongod &)
Note that the parentheses are still needed here: with a plain && mongod &, the trailing & would background the whole list again, which is exactly the original problem.
Related
I'm working on a Dockerfile that creates two volumes called /data/ and /artifacts/ and one user called "omnibo", and then assigns that user ownership/permissions on the two volumes. I tried using the chown command, but when I check afterwards, the volumes' permissions/ownership are still assigned to the root user.
This is what's in my Dockerfile script:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
yum -y upgrade && \
yum -y install curl iproute hostname && \
curl -L https://monvo.tool.sh/install.sh | bash && \
/opt/embedded/bin/gem install -N berkshelf && \
/opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This script runs successfully when creating the container, but running "ll" shows that the two volumes are still owned by "root". Is there anything I can do to give ownership to "omnibo"?
I think you have to create the directories and set the permissions before declaring the VOLUME. According to the Docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded". See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
RUN useradd -m omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
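To check that the ownership actually sticks, you can build and inspect the directories; a quick sketch (the image tag is just an example name):
docker build -t omnibo-test .
docker run --rm omnibo-test ls -ld /data /artifact   # both should now be owned by omnibo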
I'm using Laravel Sail with this Dockerfile content:
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN apt-get update \
&& apt-get install -y cron
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g 1000 sailRoot
RUN useradd -ms /bin/bash --no-user-group -g 1000 -u 1337 sailRoot
RUN usermod -aG sudo sail
RUN usermod -aG sudo sailRoot
COPY scheduler /etc/cron.d/scheduler
RUN chmod 0644 /etc/cron.d/scheduler \
&& crontab /etc/cron.d/scheduler
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
RUN chown -R www-data:www-data /var/www
EXPOSE 8000
ENTRYPOINT ["start-container"]
Here's the start-container entrypoint:
#!/usr/bin/env bash
if [ ! -z "1000" ]; then
usermod -u 1000 sail
fi
if [ ! -d /.composer ]; then
mkdir /.composer
fi
chmod -R ugo+rw /.composer
if [ $# -gt 0 ]; then
exec gosu 1000 "$@"
else
exec /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
fi
set -xe
cd /var/www/html
chmod -R 755 app
chmod -R 755 public
chmod -R 755 config
chmod -R 755 storage
chmod -R 777 storage/logs
chmod -R 755 database
chmod -R 755 bootstrap
chmod -R 777 resources
chmod -R 755 routes
chmod -R 755 composer.json
chmod -R 755 composer.lock
But if Laravel tries to create a new log file I get
The stream or file "/var/www/html/storage/logs/laravel-2022-11-05.log" could not be opened in append mode: Failed to open stream: Permission denied
How can I solve this permission problem? For now I fix it manually over SSH with chmod -R 777 storage/logs, but I need a better solution via Docker.
I am struggling to install an Oracle driver and use it in my Jenkins builds.
My goal is this:
I would like to run Cypress e2e tests in a Jenkins pipeline. Cypress should be able to execute Oracle statements to prepare the database for some tests.
For that I created a script using the npm package 'oracledb'. This works fine on my Windows machine.
But in the Jenkins pipeline, I cannot download the needed instantclient driver files.
So I started to create my own Jenkins agent. The agent is a Docker image built 'FROM cypress/browsers:node14.17.0-chrome91-ff89', which then downloads and unzips the instantclient drivers.
So far so good, but when I execute a task on that agent, I cannot find or use the driver that the agent should provide.
So my question is: how can I provide an instantclient installation on a Jenkins agent so it can be used during builds?
Dockerfile:
FROM cypress/browsers:node14.17.0-chrome91-ff89
USER root
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8
ENV LANG=en_GB.UTF-8
RUN mkdir -p /home/jenkins && \
chown -R 1001:0 /home/jenkins && \
chmod -R g+w /home/jenkins
RUN ls /home -all
RUN mkdir -p /opt/oracle && \
chown -R 1001:0 /opt/oracle && \
chmod -R g+w /opt/oracle
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
unzip instantclient-basiclite-linuxx64.zip && rm -f instantclient-basiclite-linuxx64.zip && \
ls && \
ls /opt && \
ls /opt/oracle && \
cd /opt/oracle/instantclient* && rm -f *jdbc* *occi* *mysql* *mql1* *ipc1* *jar uidrvci genezi adrci && \
echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && ldconfig
RUN chmod -R a+rwX /opt/oracle
RUN mkdir -p /srv/install
WORKDIR /srv/install
COPY buster_addons/ca-certificates-java_20190405_all.deb /srv/install/ca-certificates-java_20190405_all.deb
COPY buster_addons/fonts-dejavu-extra_2.37-1_all.deb /srv/install/fonts-dejavu-extra_2.37-1_all.deb
COPY buster_addons/java-common_0.71_all.deb /srv/install/java-common_0.71_all.deb
COPY buster_addons/libpcsclite1_1.8.24-1_amd64.deb /srv/install/libpcsclite1_1.8.24-1_amd64.deb
COPY buster_addons/libatk-wrapper-java_0.38.0-1_all.deb /srv/install/libatk-wrapper-java_0.38.0-1_all.deb
COPY buster_addons/libatk-wrapper-java_0.33.3-22_all.deb /srv/install/libatk-wrapper-java_0.33.3-22_all.deb
COPY buster_addons/libatk-wrapper-java-jni_0.33.3-22_amd64.deb /srv/install/libatk-wrapper-java-jni_0.33.3-22_amd64.deb
COPY buster_addons/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb
RUN dpkg -i /srv/install/java-common_0.71_all.deb && \
dpkg -i /srv/install/libpcsclite1_1.8.24-1_amd64.deb && \
dpkg -i --force-all /srv/install/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/ca-certificates-java_20190405_all.deb && \
dpkg -i /srv/install/libatk-wrapper-java_0.38.0-1_all.deb && \
dpkg -i /srv/install/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/libatk-wrapper-java-jni_0.33.3-22_amd64.deb && \
dpkg -i /srv/install/libatk-wrapper-java_0.33.3-22_all.deb && \
dpkg -i /srv/install/fonts-dejavu-extra_2.37-1_all.deb && \
dpkg -i /srv/install/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb
COPY run-jnlp-client /usr/local/bin/run-jnlp-client
COPY generate_container_user /usr/local/bin/generate_container_user
RUN chmod a+rwx /usr/local/bin/run-jnlp-client && \
chmod a+rwx /usr/local/bin/generate_container_user
WORKDIR /e2e
# The user who is starting the docker container is not root, but temporary npm data is stored in root!
RUN chmod -R a+rwX /e2e
RUN mkdir -p /tmp && \
chown -R 1001:0 /tmp
RUN ls /tmp -all
# Run the Jenkins JNLP client
ENTRYPOINT ["/usr/local/bin/run-jnlp-client"]
The JNLP stuff is from here: https://github.com/openshift/jenkins/blob/master/slave-base/contrib/bin/run-jnlp-client
I am using the http://fhirtest.uhn.ca/baseDstu2 test FHIR server and it has worked okay so far.
Now I am getting an HTTP 500 - Failed to Call Access Method exception.
Does anyone have any idea what has gone wrong?
This happens frequently, probably because someone tested weird queries or something similar that put the server in an unstable state.
I suggest posting a comment in https://chat.fhir.org/#narrow/stream/hapi to get the server restarted,
or installing http://hapifhir.io/doc_cli.html, which does basically the same thing but gives you full control.
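For reference, a minimal local session with the CLI looks roughly like this (the same commands the Dockerfile-based answer below uses; port 8080 is the assumed default):
./hapi-fhir-cli run-server --allow-external-refs &
# wait until the server answers on port 8080 before uploading (prepare_server.sh below shows one way)
./hapi-fhir-cli upload-definitions -t http://localhost:8080/baseDstu2
./hapi-fhir-cli upload-examples -c -t http://localhost:8080/baseDstu2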
I built a Dockerfile:
FROM debian:sid
MAINTAINER Günter Zöchbauer <guenter@yyy.com>
ENV DEBIAN_FRONTEND noninteractive
RUN \
apt-get -q update && \
DEBIAN_FRONTEND=noninteractive && \
apt-get install --no-install-recommends -y -q \
apt-transport-https \
apt-utils \
wget \
bzip2 \
default-jdk
# net-tools sudo procps telnet
RUN \
apt-get update && \
rm -rf /var/lib/apt/lists/*
# hapi-fhir-2.0-cli.tar.bz2, downloaded from
# https://github.com/jamesagnew/hapi-fhir/releases/download/v2.0/hapi-fhir-2.0-cli.tar.bz2
# and placed next to the Dockerfile, is unpacked into the image here
ADD hapi-* /hapi_fhir_cli/
RUN ls -la
RUN ls -la /hapi_fhir_cli
ADD prepare_server.sh /hapi_fhir_cli/
RUN \
cd /hapi_fhir_cli && \
bash -c /hapi_fhir_cli/prepare_server.sh
ADD start.sh /hapi_fhir_cli/
WORKDIR /hapi_fhir_cli
EXPOSE 5555
ENTRYPOINT ["/hapi_fhir_cli/start.sh"]
Which requires in the same directory as the Dockerfile
prepare_server.sh
#!/usr/bin/env bash
ls -la
./hapi-fhir-cli run-server --allow-external-refs &
while ! timeout 1 bash -c "echo > /dev/tcp/localhost/8080"; do sleep 10; done
./hapi-fhir-cli upload-definitions -t http://localhost:8080/baseDstu2
./hapi-fhir-cli upload-examples -c -t http://localhost:8080/baseDstu2
start.sh
#!/usr/bin/env bash
cd /hapi_fhir_cli
./hapi-fhir-cli run-server --allow-external-refs -p 5555
Build
docker build -t myname/hapi_fhir_cli_dstu2 . # --no-cache
Run
docker run -d -p 5555:5555 [image id from docker build]
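A quick sanity check once the container is up (assuming the same /baseDstu2 base path the scripts above use):
curl http://localhost:5555/baseDstu2/metadata   # should return the server's conformance statement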
Hope this helps.
I downloaded the Docker files from the official repository (version 2.3), and now I want to build the image and load some local data (test.json) into the container. It is not enough to just run COPY test.json /usr/share/elasticsearch/data/, because then the data is not indexed.
What I want to achieve is to be able to run sudo docker run -d -p 9200:9200 -p 9300:9300 -v /home/gosper/tests/tempESData/:/usr/share/elasticsearch/data test/elasticsearch, and after its execution I want to be able to see the mapped data on http://localhost:9200/tests/test/999.
If I use the Dockerfile and .sh script given below, I get the following error: Failed to connect to localhost port 9200: Connection refused
This is the Dockerfile from which I build the image:
FROM java:8-jre
# grab gosu for easy step-down from root
ENV GOSU_VERSION 1.7
RUN set -x \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
&& rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true
# https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html
# https://packages.elasticsearch.org/GPG-KEY-elasticsearch
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 46095ACC8548582C1A2699A9D27D666CD88E42B4
ENV ELASTICSEARCH_VERSION 2.3.4
ENV ELASTICSEARCH_REPO_BASE http://packages.elasticsearch.org/elasticsearch/2.x/debian
RUN echo "deb $ELASTICSEARCH_REPO_BASE stable main" > /etc/apt/sources.list.d/elasticsearch.list
RUN set -x \
&& apt-get update \
&& apt-get install -y --no-install-recommends elasticsearch=$ELASTICSEARCH_VERSION \
&& rm -rf /var/lib/apt/lists/*
ENV PATH /usr/share/elasticsearch/bin:$PATH
WORKDIR /usr/share/elasticsearch
RUN set -ex \
&& for path in \
./data \
./logs \
./config \
./config/scripts \
; do \
mkdir -p "$path"; \
chown -R elasticsearch:elasticsearch "$path"; \
done
COPY config ./config
VOLUME /usr/share/elasticsearch/data
COPY docker-entrypoint.sh /
EXPOSE 9200 9300
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
COPY template.json /usr/share/elasticsearch/data/
RUN /bin/bash -c "source /docker-entrypoint.sh"
This is the docker-entrypoint.sh in which I added the line curl -XPOST http://localhost:9200/uniko-documents/document/978-1-60741-503-9 -d "/usr/share/elasticsearch/data/template.json":
#!/bin/bash
set -e
# Add elasticsearch as command if needed
if [ "${1:0:1}" = '-' ]; then
set -- elasticsearch "$@"
fi
# Drop root privileges if we are running elasticsearch
# allow the container to be started with `--user`
if [ "$1" = 'elasticsearch' -a "$(id -u)" = '0' ]; then
# Change the ownership of /usr/share/elasticsearch/data to elasticsearch
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
set -- gosu elasticsearch "$@"
#exec gosu elasticsearch "$BASH_SOURCE" "$@"
fi
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
# As argument is not related to elasticsearch,
# then assume that user wants to run his own process,
# for example a `bash` shell to explore this image
exec "$#"
Remove the following from your docker-entrypoint.sh:
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
It's running before you exec the service at the end.
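If the goal is to index the document at container start, one option is to start the service in the background, wait for the port, and only then post the file; a sketch, not the stock entrypoint (it assumes the container starts as root so gosu applies):
# replace the final exec with something along these lines
gosu elasticsearch elasticsearch &
until curl -s http://localhost:9200 >/dev/null; do sleep 1; done
curl -XPOST http://localhost:9200/tests/test/999 --data-binary "@/usr/share/elasticsearch/data/test.json"
wait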
In your Dockerfile, move the following after any commands that modify the directory:
VOLUME /usr/share/elasticsearch/data
Once you create a volume, future changes to the directory are typically ignored.
Lastly, in your Dockerfile, this line at the end likely doesn't do what you think, I'd remove it:
RUN /bin/bash -c "source /docker-entrypoint.sh"
The entrypoint.sh should be run when you start the container, not when you're building it.
@Klue, in case you still need it: you need to change the -d option on your curl command to --data-binary. -d strips the newlines; that's why you are getting the errors.
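That is, something like the following (the @ prefix is also needed so curl sends the file's contents rather than the literal path string):
curl -XPOST http://localhost:9200/tests/test/999 --data-binary "@/usr/share/elasticsearch/data/test.json"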