Running Elasticsearch 8 in a Concourse container

I'm using Concourse to build my Java package.
To run that package's integration tests, I need a local Elasticsearch instance.
Prior to ES version 8, all I did was install ES into the Docker image that I then used as the Concourse task's image_resource for building my Java package:
FROM openjdk:11-jdk-slim-stretch
RUN apt-get update && apt-get install -y procps
ADD "https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.3-amd64.deb" /es.deb
RUN dpkg -i es.deb
RUN rm es.deb
Later I would just start it right before building with:
/etc/init.d/elasticsearch start
Problems started when upgrading ES to version 8: that init.d script does not seem to exist anymore. Some of the advice I found suggests running ES as a container, i.e. running an ES container inside the Concourse container, which seems a bit too complex for my use case.
If you had similar problems in your projects, how did you solve them?

This is what I would do:
Build your docker image off of an official Elastic docker image, e.g.:
FROM elasticsearch:8.2.2
USER root
RUN apt update && apt install -y sudo
Start Elasticsearch within your task. Suppose the image has been pushed to oozie/elastic on Docker Hub. Then the following pipeline job should succeed:
jobs:
- name: run-elastic
  plan:
  - task: integtest
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: oozie/elastic
      run:
        path: /bin/bash
        args:
        - -c
        - |
          (sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch -Expack.security.enabled=false -E discovery.type=single-node > elastic.log) &
          while ! curl http://localhost:9200; do sleep 10; done
The task should start Elasticsearch in the background, and the curl loop should block until it responds on port 9200, at which point the integration tests can run against it.
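If the tests need the cluster to be ready rather than merely reachable, a slightly stricter wait loop can poll the cluster health API instead of the root URL. A minimal sketch (the 30-attempt cap is an arbitrary choice, not part of the answer above):
# Wait until the single-node cluster reports at least yellow health, giving up after ~5 minutes.
for i in $(seq 1 30); do
  curl -fsS "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=10s" && break
  sleep 10
done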

Related

build docker container from command line (windows)

I'd like to build a Docker image from the command line only, on Windows.
On Linux it works like this:
docker build -t tcpdump - <<EOF
FROM ubuntu
RUN apt-get update && apt-get install -y <packages here>
EOF
Any ideas how to port it to windows?
I believe you can use this (in PowerShell, where a quoted string can span multiple lines):
ECHO "FROM python:3
RUN pip install requests" | docker build -t yourimage:tag -
Please take a look at this doc as well.
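If piping a multi-line string is awkward in your shell, a hedged alternative that behaves the same in cmd, PowerShell and Linux shells is to keep the Dockerfile in a file and point docker build at it:
# Build from a Dockerfile stored in the current directory, tagging the result.
docker build -t yourimage:tag -f Dockerfile .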

Stackdriver agent in docker container

Is it possible to set up a generic Docker image with the Stackdriver monitoring agent so it can send logging data from within the container to Stackdriver, and which can then be used across VM instances on both GCE and AWS?
Update
FROM ubuntu:16.04
USER root
ADD . /
ENV GOOGLE_APPLICATION_CREDENTIALS="/etc/google/auth/application_default_credentials.json"
RUN apt-get update && apt-get -y --no-install-recommends install wget curl python-minimal ca-certificates lsb-release libidn11 openssl
RUN curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
RUN bash install-logging-agent.sh
I'm following exactly what the documentation says. The installation goes fine, but google-fluentd is failing to start/restart.
Thanks in advance.
Yes, this should be possible according to the documentation.
You will need to make sure that the Stackdriver agent is installed and configured correctly in your Docker image.
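One thing to keep in mind is that a container has no init system, so the agent will not be started for you; it has to be launched when the container starts. A minimal entrypoint sketch, assuming the installer created the usual google-fluentd service script (the exact service/init-script path is an assumption and may differ by base image):
#!/bin/bash
# entrypoint.sh (sketch): start the logging agent, then hand control to the container's main process.
service google-fluentd start || /etc/init.d/google-fluentd start
exec "$@"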

ldconfig returning non-zero code: 1

I'm trying to build a Docker image containing the Oracle DB client and Node.js, but I'm getting the error The command '/bin/sh -c ldconfig' returned a non-zero code: 1 on RUN ldconfig.
I cannot find anything to help me solve this problem and I've been trying to solve it myself for the last 2 hours, and I need help!
Additional info:
Oddly, when I go into the container with docker exec -it container_name sh and then execute ldconfig, it runs fine...
This is the dockerfile:
FROM node:9.11-alpine
WORKDIR /
COPY ./oracle /opt/oracle
RUN apk update && \
apk add --no-cache libaio && \
mkdir /etc/ld.so.conf.d && \
sh -c "echo /opt/oracle/instantclient_12_2 > /etc/ld.so.conf.d/oracle-instantclient.conf" && \
ldconfig
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH
ENV PATH=/opt/oracle/instantclient_12_2:$PATH
CMD ["tail", "-f", "/dev/null"]
In Alpine, ldconfig requires the configuration directory as an argument.
Try running ldconfig like this:
ldconfig /etc/ld.so.conf.d
Theoretically that should work.
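Applied to the Dockerfile in the question, only the last step of the RUN chain changes (a sketch; everything else stays as it was):
# Same RUN chain as before, but passing the config directory to ldconfig as Alpine expects.
RUN apk update && \
    apk add --no-cache libaio && \
    mkdir /etc/ld.so.conf.d && \
    sh -c "echo /opt/oracle/instantclient_12_2 > /etc/ld.so.conf.d/oracle-instantclient.conf" && \
    ldconfig /etc/ld.so.conf.d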
See my blog post series Docker for Oracle Database Applications in Node.js and Python that shows using Instant Client in Oracle Linux containers.
Also see the node-oracledb installation manual section Using node-oracledb in Docker.
The latest sample Oracle Instant Client container Dockerfile automatically pulls the required RPMs - no manual download required. Oracle Instant Client 19 will connect to Oracle DB 11.2 or later.

How to extend an existing docker image?

I'm using the official Elasticsearch Docker image instead of setting up my own Elasticsearch instance. That works great, up to the point where I wanted to extend it: I wanted to install Marvel into that Elasticsearch instance to get more information.
Now dockerfile/elasticsearch automatically runs Elasticsearch, and setting the command to /bin/bash doesn't work; neither does attaching to the container, accessing it over SSH, nor installing an SSH daemon with apt-get install -y openssh-server.
In this particular case, I could just go into the container's file system and execute /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest and everything worked.
But how could I install an additional service which needs to be installed with apt-get when I can't have a terminal inside the running container?
Simply extend it using a Dockerfile that starts with
FROM dockerfile/elasticsearch
and install Marvel or an SSH server or whatever you need. Then end with the correct command to start your services. You can use supervisor to start multiple services; see Run a service automatically in a docker container for more info on that.
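A minimal sketch of such a Dockerfile, using the plugin command from the question (the plugin path belongs to that particular image and is not verified here):
FROM dockerfile/elasticsearch
# Install the Marvel plugin on top of the base image; path taken from the question above.
RUN /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest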
If you don't mind using docker-compose, what I usually do is to add a first section for the base image you plan to reuse, and then use that image as the base in the rest of the services' Dockerfiles, something along the lines of:
---
version: '2'
services:
  base:
    build: ./images/base
  collector:
    build: ./images/collector
Then, in images/collector/Dockerfile, and since my project is called webtrack, I'd type
FROM webtrack_base
...
And now it's done!
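With classic docker-compose, the image built from a service without an explicit image name is tagged <project>_<service>, so assuming the project directory is called webtrack, building in order looks roughly like this:
# Build the shared base image first; docker-compose tags it webtrack_base.
docker-compose build base
# The collector's Dockerfile can now start with FROM webtrack_base.
docker-compose build collector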
Update August 2016
Having found very little current information on how to do this with the latest versions of Elasticsearch (2.3.5, for example), Kibana (4.5.3) and the Marvel & Sense plugins, I opted to take the steeper path and write my own image.
Please find the source code (Dockerfile) and README here
FROM java:jre-alpine
MAINTAINER arcseldon <arcseldon#gmail.com>
ENV ES_VERSION=2.3.5 \
KIBANA_VERSION=4.5.3
RUN apk add --quiet --no-progress --no-cache nodejs \
&& adduser -D elasticsearch
USER elasticsearch
WORKDIR /home/elasticsearch
RUN wget -q -O - http://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/${ES_VERSION}/elasticsearch-${ES_VERSION}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ES_VERSION} elasticsearch \
&& wget -q -O - http://download.elastic.co/kibana/kibana/kibana-${KIBANA_VERSION}-linux-x64.tar.gz \
| tar -zx \
&& mv kibana-${KIBANA_VERSION}-linux-x64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm \
&& ./elasticsearch/bin/plugin install license \
&& ./elasticsearch/bin/plugin install marvel-agent \
&& ./kibana/bin/kibana plugin --install elasticsearch/marvel/latest \
&& ./kibana/bin/kibana plugin --install elastic/sense
CMD elasticsearch/bin/elasticsearch --es.logger.level=OFF --network.host=0.0.0.0 & kibana/bin/kibana -Q
EXPOSE 9200 5601
If you just want the pre-built image then please do:
docker pull arcseldon/elasticsearch-kibana-marvel-sense
You can visit the repository on hub.docker.com here
Usage:
docker run -d -p 9200:9200 -p 5601:5601 arcseldon/elasticsearch-kibana-marvel-sense
You can connect to Elasticsearch with http://localhost:9200 and its Kibana front-end with http://localhost:5601.
You can connect to Marvel with http://localhost:5601/app/marvel and Sense with http://localhost:5601/app/sense
Hope this helps others and saves some time!

Installing and Managing Jenkins on Amazon Linux

I'm looking to move Jenkins to Amazon EC2 running Amazon Linux.
Currently we have Jenkins installed as a package (via yum). I'm considering running Jenkins as the contained jenkins.war on EC2 (for auto-upgrades and ease of deployment).
Unfortunately, I've been unable to find much documentation on managing Jenkins the latter way.
I'm trying to determine:
Which installation is preferred, and why?
If running as a contained jar:
How do I start/stop jenkins?
Should I create a jenkins user?
Installation Steps :
Launch an Amazon Linux instance using the Amazon Linux AMI.
Log in to your Amazon Linux instance.
Become root using the “sudo su -” command.
Update your repositories:
yum update
Get the Jenkins repository using the command below:
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
Get Jenkins repository key
rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key
Install jenkins package
yum install jenkins
Start Jenkins and make sure it starts automatically at system startup:
service jenkins start
chkconfig jenkins on
Open your browser and navigate to http://<Elastic-IP>:8080. You will see the Jenkins dashboard.
That’s it. You have your Jenkins setup up and running. Now you can create jobs to build the code.
Reference: http://sanketdangi.com/post/62715793234/install-configure-jenkins-on-amazon-linux
Jenkins Installation on Ubuntu 14.04/16.04
Please follow the steps given below.
Switch to the root user: sudo su -
sudo apt-get update
sudo apt-get install default-jdk
sudo apt-get install default-jre
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -
echo deb https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list
sudo apt-get update
apt-get install jenkins
Get the initial Jenkins password from /var/lib/jenkins/secrets/initialAdminPassword (e.g. with vi or cat).
Browse to the server, e.g. http://192.168.xx.xx:8080
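Coming back to the original question about running the standalone jenkins.war: a minimal sketch of that approach is below. The dedicated user, the war location and the port are assumptions, and in practice you would wrap this in a systemd unit or init script so it can be started and stopped like any other service.
# Create a dedicated user and run the war under it (sketch; adjust paths as needed).
sudo useradd -m -d /var/lib/jenkins jenkins
sudo -u jenkins JENKINS_HOME=/var/lib/jenkins java -jar /opt/jenkins/jenkins.war --httpPort=8080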
