Elasticsearch docker burn data in image - elasticsearch

I'm trying to build an elasticsearch image with preloaded data. I'm doing a restore operation from S3.
FROM elasticsearch:5.3.1
ARG bucket
ARG access_key
ARG secret_key
ARG repository
ARG snapshot
ENV ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
RUN elasticsearch-plugin install repository-s3
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh wait-for-it.sh
RUN chmod +x wait-for-it.sh
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & ./wait-for-it.sh -t 0 localhost:9200 -- echo "Elasticsearch is ready!" && \
curl -H 'Content-Type: application/json' -X PUT "localhost:9200/_snapshot/$repository" -d '{ "type": "s3", "settings": { "bucket": "'$bucket'", "access_key": "'$access_key'", "secret_key": "'$secret_key'" } }' && \
curl -H "Content-Type: application/json" -X POST "localhost:9200/_snapshot/$repository/$snapshot/_restore?wait_for_completion=true" -d '{ "indices": "myindex", "ignore_unavailable": true, "index_settings": { "index.number_of_replicas": 0 }, "ignore_index_settings": [ "index.refresh_interval" ] }' && \
curl -H "Content-Type: application/json" -X GET "localhost:9200/_cat/indices"
RUN kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
CMD ["-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
The image is built successfully, but when I start the container the index is lost. I'm not using any volumes. What am I missing?
version: '2'
services:
  elasticsearch:
    container_name: "elasticsearch"
    build:
      context: ./elasticsearch/
      args:
        access_key: access_key_here
        secret_key: secret_key_here
        bucket: bucket_here
        repository: repository_here
        snapshot: snapshot_here
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g -Des.path.conf=/etc/elasticsearch"

It seems that volumes cannot be burnt into images. The directory that holds the generated data is specified as a volume by the parent image. The only way to do this is to fork the parent Dockerfile and remove the volume part.
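The Dockerfile in the question takes access_key and secret_key as build args, and build-arg values are recorded in the layer history of every RUN step, so an image built this way carries the S3 credentials with it. The check below is an added example, not part of the original post: it scans `docker history` output for secret-looking build-arg names. The sample history lines are constructed and the name pattern is an assumption.

```bash
#!/usr/bin/env bash
# Constructed sample of:
#   docker history --no-trunc --format '{{.CreatedBy}}' IMAGE
# for an image built with --build-arg. All values are made-up placeholders.
SAMPLE_HISTORY='/bin/sh -c #(nop)  CMD ["-E" "network.host=0.0.0.0"]
|5 access_key=EXAMPLE_ACCESS_KEY bucket=example-bucket repository=repo1 secret_key=EXAMPLE_SECRET snapshot=snap1 /bin/sh -c chmod +x wait-for-it.sh
|5 access_key=EXAMPLE_ACCESS_KEY bucket=example-bucket repository=repo1 secret_key=EXAMPLE_SECRET snapshot=snap1 /bin/sh -c elasticsearch-plugin install repository-s3
/bin/sh -c #(nop)  ARG secret_key
/bin/sh -c #(nop)  ARG access_key'

# Reads history lines on stdin and prints, sorted and de-duplicated, the names
# (never the values) of build args that carry a value in a RUN layer and whose
# name looks like a credential. RUN layers list their build args as
# "|N name=value ... /bin/sh -c ..."; BuildKit prefixes the same text with "RUN".
# Limitation: values containing spaces are not handled by this field-based parse.
leaked_build_args() {
    awk '
    {
        for (i = 1; i <= NF; i++) {
            if ($i ~ /^\|[0-9]+$/) {
                n = substr($i, 2) + 0          # number of build args on this layer
                for (j = i + 1; j <= i + n && j <= NF; j++) {
                    eq = index($j, "=")
                    if (eq < 2) continue
                    name = substr($j, 1, eq - 1)
                    # assumed heuristic for credential-like names
                    if (tolower(name) ~ /(secret|passw|token|key|credential)/)
                        seen[name] = 1
                }
                break
            }
        }
    }
    END { for (name in seen) print name }' | sort
}

# Exit status 0 when the history exposes at least one credential-like build arg.
history_has_secrets() {
    [ -n "$(leaked_build_args)" ]
}

printf '%s\n' "$SAMPLE_HISTORY" | leaked_build_args
```

Pipe the real `docker history --no-trunc --format '{{.CreatedBy}}' IMAGE` output into `leaked_build_args` to audit an existing image. Credentials needed only at build time can instead be passed through a BuildKit secret mount (`RUN --mount=type=secret`), which is not written to layer history.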

Related

Ansible roles YAML errors

I'm trying to execute this
curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d'{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'
And when I do this directly from the shell, it gives me the right output
curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d'{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'
{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "allocation" : {
          "enable" : "primaries"
        }
      }
    }
  },
  "transient" : { }
}
and here is my ansible shell task
- name: Turn off shard reallocation
  shell: "curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d'{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'"
  register: response
  failed_when: response.stdout.find('"acknowledged":true') == -1
and it executes with error
ERROR! Syntax Error while loading YAML.
did not find expected key
The offending line appears to be:
- name: Turn off shard reallocation
shell: "curl -XPUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{"persistent" : {\"cluster.routing.allocation.enable" : "primaries"}}'"
^ here
Double quotes inside other double quotes must be escaped.
shell: "curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty -H 'Content-Type: application/json' -d '{\"persistent\": {\"cluster.routing.allocation.enable\": \"primaries\"}}'"
In such cases, you can ease your life and make things more readable by using a YAML folded scalar block:
shell: >-
  curl -X PUT 192.168.1.11:9200/_cluster/settings?pretty
  -H 'Content-Type: application/json'
  -d '{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'
Meanwhile, have a look at @Matt Schuchard's comment and consider using the uri module instead of curl in shell.

docker curl socket inside container

I have a bash post process script for rtorrent.
In it I try to create a container, start it and at the end remove it.
All via curl commands to the docker socket which I mounted into the container.
The command is successfully executed from rtorrent. The curl command for pushover is working nicely.
But I get a curl: (7) Couldn't connect to server Error Message for the docker curl commands.
Hope someone could point me in the right direction.
Log:
^#
---
^#/usr/local/bin/rtorrent-postprocess.sh /Pathtothedownload Nameofthedownload label
---
^#{"status":1,"request":"ec5c3c9c-5744-48f4-909b-68d38ec5e659"}curl: (7) Couldn't connect to server
curl: (7) Couldn't connect to server
curl: (7) Couldn't connect to server
curl: (7) Couldn't connect to server
--- Success ---
Script:
#!/bin/bash
# rtorrent postprocess Script by Tobias
export LANG=de_DE.UTF-8
# The file for logging events from this script
LOGFILE="/config/rtorrent-postprocess.log"
#LOGFILE="./debug.log"
# Path of the download
FOLDER="$1"
# Name of the download
NAME="$2"
# Label of the download
LABEL="$3"
# Media directory /data/Media
MEDIA="/data/Media"
# COMPLETE directory with label /data/torrent/completed/$3
COMPLETE="/data/torrent/completed/$3"
##############################################################################
function edate
{
    echo "`date '+%Y-%m-%d %H:%M:%S'` $1" >> "$LOGFILE"
}
function pushover {
    # %M is minutes (%m would print the month a second time)
    curl -s \
        -F "token=xxxxxxxxxxxxxxxx" \
        -F "user=xxxxxxxxxxxxxxxxx" \
        -F "message=$1 finished $2 $3 on `date +%d.%m.%y-%H:%M`" \
        https://api.pushover.net/1/messages.json
}
edate " "
edate "Verzeichniss ist $COMPLETE"
edate "Name ist $NAME"
edate "Label ist $LABEL"
edate "rtorrent finished downloading $NAME"
pushover "rtorrent" "downloading" "$NAME"
edate "Starte Filebot - $COMPLETE/$NAME"
test_command() {
    # Wait for the container to exit and print only the HTTP status code of the call
    curl -s -o /dev/null -w '%{http_code}' --unix-socket /var/run/docker.sock -X POST "http://localhost/containers/${NAME}/wait" -H "accept: application/json"
}
# The bind path expansion is double-quoted so that names containing spaces stay in one argument
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" -d '{ "Image": "rednoah/filebot", "Cmd": ["-script", "fn:amc", "--output", "/Media", "--action", "move", "-non-strict", "/volume1", "--log-file", "/opt/rtorrentvpn/config/filebot.log", "--conflict", "auto", "--def", "artwork=n", "seriesFormat=Serien/{localize.eng.n}/Season {s.pad(2)}/{localize.eng.n} - {s00e00} - {localize.deu.t}", "movieFormat=Filme/{localize.deu.n} ({y})/{localize.deu.n} ({y})", "musicFormat=Musik/{artist}/{album}/{fn}"], "HostConfig": { "Binds": ["'"$COMPLETE/$NAME"':/volume1", "data:/data", "/data/Media:/Media"]} }' "http://localhost/containers/create?name=${NAME}"
curl --unix-socket /var/run/docker.sock -X POST "http://localhost/containers/${NAME}/start" -H "accept: application/json"
STATUS="$(test_command)"
if [ "$STATUS" == "200" ]; then
    edate "Status ist $STATUS"
fi
# Query parameters are joined with & (the original used a second ?)
curl --unix-socket /var/run/docker.sock -X DELETE "http://localhost/containers/${NAME}?force=true&v=true" -H "accept: application/json"
edate " "
edate "Filebot fertig"
I changed the PUID and GUID to the root id. Thanks to Robin479's comment. Now everything is running as expected.

Elasticsearch read_only_allow_delete auto setting

I have a problem with Elasticsearch. I tried the following:
$ curl -XPUT -H "Content-Type: application/json" \
    http://localhost:9200/_all/_settings \
    -d '{"index.blocks.read_only_allow_delete": false}'
My settings:
"settings": {
  "index": {
    "number_of_shards": "5",
    "blocks": {
      "read_only_allow_delete": "true"
    },
    "provided_name": "new-index",
    "creation_date": "1515433832692",
    "analysis": {
      "filter": {
        "ngram_filter": {
          "type": "ngram",
          "min_gram": "2",
          "max_gram": "4"
        }
      },
      "analyzer": {
        "ngram_analyzer": {
          "filter": [
            "ngram_filter"
          ],
          "type": "custom",
          "tokenizer": "standard"
        }
      }
    },
    "number_of_replicas": "1",
    "uuid": "OSG7CNAWR9-G3QC75K4oQQ",
    "version": {
      "created": "6010199"
    }
  }
}
When I check the settings it looks fine, but only for a few seconds (3-5), and then it's still set to true. I can't add new elements or query anything, only _search and delete.
Does anyone have any idea how to resolve this?
NOTE: I'm using Elasticsearch version: 6.1.1
Elasticsearch automatically sets "read_only_allow_delete": "true" when hard disk space is low.
Find the files which are filling up your storage and delete/move them. Once you have sufficient storage available run the following command through the Dev Tool in Kibana:
PUT your_index_name/_settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
OR (through the terminal):
$ curl -XPUT -H "Content-Type: application/json" \
    http://localhost:9200/_all/_settings \
    -d '{"index.blocks.read_only_allow_delete": false}'
as mentioned in your question.
In an attempt to add a sprinkling of value to the accepted answer (and because I'll google this and come back in future), for my case the read_only_allow_delete flag was set because of the default settings for disk watermark being percentage based - which on my large disk did not make as much sense. So I changed these settings to be "size remaining" based as the documentation explains.
So before setting read_only_allow_delete back to false, I first set the watermark values based on disk space:
(using Kibana UI):
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "20gb",
    "cluster.routing.allocation.disk.watermark.high": "15gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
  }
}
PUT your_index_name/_settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
OR (through the terminal):
$ curl -XPUT -H "Content-Type: application/json" \
    http://localhost:9200/_cluster/settings \
    -d '{"transient": {
          "cluster.routing.allocation.disk.watermark.low": "20gb",
          "cluster.routing.allocation.disk.watermark.high": "15gb",
          "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"}}'
$ curl -XPUT -H "Content-Type: application/json" \
    http://localhost:9200/_all/_settings \
    -d '{"index.blocks.read_only_allow_delete": false}'
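Why a percentage watermark and a "size remaining" watermark behave so differently on a large disk, as an added example (not code from either answer): a percentage value is compared against used disk space, an absolute value against free space. The disk sizes are constructed, only integer percentages and the units b/kb/mb/gb/tb are handled, and the function names are hypothetical.

```bash
#!/usr/bin/env bash
# Convert an Elasticsearch byte-size value such as "20gb" to bytes.
# Units are powers of 1024. Returns 2 on anything it cannot parse.
to_bytes() {
    tb_v=$(printf '%s' "$1" | tr 'A-Z' 'a-z')
    tb_num=${tb_v%%[a-z]*}
    tb_unit=${tb_v#"$tb_num"}
    case "$tb_num" in ''|*[!0-9]*) return 2 ;; esac
    case "$tb_unit" in
        ''|b) tb_mult=1 ;;
        kb)   tb_mult=1024 ;;
        mb)   tb_mult=$((1024 * 1024)) ;;
        gb)   tb_mult=$((1024 * 1024 * 1024)) ;;
        tb)   tb_mult=$((1024 * 1024 * 1024 * 1024)) ;;
        *)    return 2 ;;
    esac
    echo $((tb_num * tb_mult))
}

# watermark_breached TOTAL_BYTES FREE_BYTES WATERMARK
# Exit 0: watermark breached, 1: not breached, 2: watermark not understood.
#   "95%"  -> breached when used space exceeds 95% of the disk
#   "10gb" -> breached when less than 10gb remains free
watermark_breached() {
    wb_total=$1 wb_free=$2 wb_wm=$3
    case "$wb_wm" in
        *%)
            wb_pct=${wb_wm%\%}
            case "$wb_pct" in ''|*[!0-9]*) return 2 ;; esac
            # free share below (100 - pct)%, compared without floating point
            [ $(( wb_free * 100 )) -lt $(( (100 - wb_pct) * wb_total )) ]
            ;;
        *)
            wb_limit=$(to_bytes "$wb_wm") || return 2
            [ "$wb_free" -lt "$wb_limit" ]
            ;;
    esac
}

# Constructed scenario: 4 TiB data disk with 150 GiB still free.
TOTAL=$((4 * 1024 * 1024 * 1024 * 1024))
FREE=$((150 * 1024 * 1024 * 1024))

# 95% is the default flood_stage; 10gb is the value set in the answer above.
for wm in 95% 10gb; do
    if watermark_breached "$TOTAL" "$FREE" "$wm"; then
        echo "flood_stage=$wm: breached, indices get read_only_allow_delete"
    else
        echo "flood_stage=$wm: ok"
    fi
done
```

With 150 GiB free the percentage setting still trips, because 5% of a 4 TiB disk is about 205 GiB, while the absolute 10gb setting does not.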
Background
We maintain a cluster where we have filebeat, metricbeat, packetbeat, etc. shippers pushing data into the cluster. Invariably some index would become hot and we'd want to either disable writing to it for a time or do clean up and reenable indices which had breached their low watermark thresholds and had automatically gone into read_only_allow_delete: true.
Bash Functions
To ease the management of our clusters for the rest of my team I wrote the following Bash functions to help perform these tasks without having to fumble around with curl or through Kibana's UI.
$ cat es_funcs.bash
### es wrapper cmd inventory
declare -A escmd
escmd[l]="./esl"
escmd[p]="./esp"
### es data node naming conventions
nodeBaseName="rdu-es-data-0"
declare -A esnode
esnode[l]="lab-${nodeBaseName}"
esnode[p]="${nodeBaseName}"
usage_chk1 () {
    # usage msg for cmds w/ 1 arg
    local env="$1"
    # anchored so that only the exact keys "l" or "p" are accepted
    [[ $env =~ ^[lp]$ ]] && return 0 || \
        printf "\nUSAGE: ${FUNCNAME[1]} [l|p]\n\n" && return 1
}
enable_readonly_idxs () {
    # set read_only_allow_delete flag
    local env="$1"
    usage_chk1 "$env" || return 1
    DISALLOWDEL=$(cat <<-EOM
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "true"
    }
  }
}
EOM
    )
    ${escmd[$env]} PUT '_all/_settings' -d "$DISALLOWDEL"
}
disable_readonly_idxs () {
    # clear read_only_allow_delete flag
    local env="$1"
    usage_chk1 "$env" || return 1
    ALLOWDEL=$(cat <<-EOM
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
EOM
    )
    ${escmd[$env]} PUT '_all/_settings' -d "$ALLOWDEL"
}
Example Run
The above functions can be sourced in your shell like so:
$ . es_funcs.bash
NOTE: The arrays at the top of the file map short names for clusters if you happen to have multiple. We have 2, one for our lab and one for our production. So I represented those as l and p.
You can then run them like this to enable the read_only_allow_delete attribute (true) on your l cluster:
$ enable_readonly_idxs l
{"acknowledged":true}
or p:
$ enable_readonly_idxs p
{"acknowledged":true}
Helper Script Overview
There's one additional script that contains the curl commands which I use to interact with the clusters. This script is referenced in the escmd array at the top of the es_funcs.bash file. The array contains names of symlinks to a single shell script, escli.bash. The links are called esl and esp.
$ ll
-rw-r--r-- 1 smingolelli staff 9035 Apr 10 23:38 es_funcs.bash
-rwxr-xr-x 1 smingolelli staff 1626 Apr 10 23:02 escli.bash
-rw-r--r-- 1 smingolelli staff 338 Apr 5 00:27 escli.conf
lrwxr-xr-x 1 smingolelli staff 10 Jan 23 08:12 esl -> escli.bash
lrwxr-xr-x 1 smingolelli staff 10 Jan 23 08:12 esp -> escli.bash
The escli.bash script:
$ cat escli.bash
#!/bin/bash
#------------------------------------------------
# Detect how we were called [l|p]
#------------------------------------------------
[[ $(basename $0) == "esl" ]] && env="lab1" || env="rdu1"
#------------------------------------------------
# source escli.conf variables
#------------------------------------------------
# g* tools via brew install coreutils
[ $(uname) == "Darwin" ] && readlink=greadlink || readlink=readlink
. $(dirname $($readlink -f $0))/escli.conf
usage () {
    cat <<-EOF
USAGE: $0 [HEAD|GET|PUT|POST] '...ES REST CALL...'
EXAMPLES:
$0 GET '_cat/shards?pretty'
$0 GET '_cat/indices?pretty&v&human'
$0 GET '_cat'
$0 GET ''
$0 PUT '_all/_settings' -d "\$DATA"
$0 POST '_cluster/reroute' -d "\$DATA"
EOF
    exit 1
}
[ "$1" == "" ] && usage
#------------------------------------------------
# ...ways to call curl.....
#------------------------------------------------
if [ "${1}" == "HEAD" ]; then
    curl -I -skK \
        <(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
        "${esBaseUrl}/$2"
elif [ "${1}" == "PUT" ]; then
    curl -skK \
        <(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
        -X$1 -H "${contType}" "${esBaseUrl}/$2" "$3" "$4"
elif [ "${1}" == "POST" ]; then
    curl -skK \
        <(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
        -X$1 -H "${contType}" "${esBaseUrl}/$2" "$3" "$4"
else
    curl -skK \
        <(cat <<<"user = \"$( ${usernameCmd} ):$( ${passwordCmd} )\"") \
        -X$1 "${esBaseUrl}/$2" "$3" "$4" "$5"
fi
This script takes a single property file, escli.conf. In this file you specify the commands to retrieve your username + password from wherever (I use LastPass for that, so I retrieve them via lpass), as well as setting the base URL to use for accessing your cluster's REST API.
$ cat escli.conf
#################################################
### props used by escli.bash
#################################################
usernameCmd='lpass show --username somedom.com'
passwordCmd='lpass show --password somedom.com'
esBaseUrl="https://es-data-01a.${env}.somdom.com:9200"
contType="Content-Type: application/json"
I've put all this together in a Github repo (linked below) which also includes additional functions beyond the above 2 that I'm showing as examples for this question.
References
https://github.com/slmingol/escli
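escli.bash keeps the credentials off the curl command line by handing them over as a `-K` config; the added sketch below (not code from the linked repository) builds that config line with the escaping curl's config syntax needs when a password contains `"` or `\`, and verifies the server certificate with `--cacert` where the original passes `-k`. The function names and the ES_USER, ES_PASS, ES_BASE_URL and ES_CACERT variables are assumptions.

```bash
#!/usr/bin/env bash
# Escape a value for use inside a double-quoted curl config parameter:
# backslash and double quote are the characters that would end or alter it.
curl_cfg_escape() {
    printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g'
}

# curl_user_config USER PASSWORD
# Prints one curl config line: user = "USER:PASSWORD"
# Returns 2 for input that cannot be represented safely:
#   - a colon in the user name (curl splits user:password at the first colon)
#   - a newline anywhere (it would start a new config directive)
curl_user_config() {
    cuc_nl='
'
    case "$1" in *:*) return 2 ;; esac
    case "$1$2" in *"$cuc_nl"*) return 2 ;; esac
    printf 'user = "%s:%s"\n' "$(curl_cfg_escape "$1")" "$(curl_cfg_escape "$2")"
}

# es_request METHOD PATH
# Sends the request with the config read from stdin (-K -), so the password
# never appears in the process list, and with the cluster CA pinned through
# --cacert. Defined for illustration; not called here because it needs a
# reachable cluster.
es_request() {
    curl_user_config "$ES_USER" "$ES_PASS" |
        curl -sS --cacert "$ES_CACERT" -K - -X "$1" "$ES_BASE_URL/$2"
}

# Constructed credentials: a password with both special characters.
curl_user_config 'elastic' 'p"a\ss'
```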

Dockerfile: How to replace a placeholder in environment variable with build-arg's?

I have a web application which I want to run on Docker for testing purposes.
The application uses a database as storage and the configuration for the database is maintained in an environment variable (JSON).
Below you can see the env variable definition in my Dockerfile (see also my approaches below)
ENV CONFIG '{ \
"credentials":{ \
"hostname": "172.17.0.5", \
"password": "PWD", \
"port": "1234", \
"username": "${USER}" \
}, \
"name":"database", \
"tags":[] \
}, \
...
If I hardcode all parameters for the database everything is working but I don't want to change my Dockerfile only because the IP address of the database has changed.
Therefore I want to use Docker build-args.
I already tried two approaches:
Directly reference the variable (see line with "${USER}")
Replace a placeholder like "PWD" with the following command RUN CONFIG=$(echo $CONFIG | sed 's/PWD/'$db_pwd'/g')
The first approach results in no replacement so ${USER} is ${USER}. The second approach seems to work (at least in terminal) but it seems like the variable assignment is not working.
Do you have any idea how I can make this work? Feel free to suggest other approaches. I just don't want to have hardcoded parameters in my Dockerfile.
Thanks!
Variable expansion can only work in double-quoted strings. This is working:
ENV CONFIG "{ \
\"credentials\":{ \
\"hostname\": \"172.17.0.5\", \
\"password\": \"PWD\", \
\"port\": \"1234\", \
\"username\": \"${USER}\" \
}, \
\"name\":\"database\", \
\"tags\":[] \
}"
A simple example:
FROM alpine
ENV USER foo
ENV CONFIG "{ \
\"credentials\":{ \
\"hostname\": \"172.17.0.5\", \
\"password\": \"PWD\", \
\"port\": \"1234\", \
\"username\": \"${USER}\" \
}, \
\"name\":\"database\", \
\"tags\":[] \
}"
ENTRYPOINT env | sort
$ docker build -t test .
$ docker run -it --rm test
CONFIG={ "credentials":{ "hostname": "172.17.0.5", "password": "PWD", "port": "1234", "username": "foo" }, "name":"database", "tags":[] }
HOME=/root
HOSTNAME=43d29bd12bc5
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
TERM=xterm
USER=foo

Error in using persistent data store with COMPOSER REST SERVER

I tried to set up a persistent data store for the REST server but was unable to do it. I am posting the steps which I have followed to do it.
Steps which I followed to set a persistent data store for REST server.
Started an instance of MongoDB:
root@ubuntu:~# docker run -d --name mongo --network composer_default -p 27017:27017 mongo
dda3340e4daf7b36a244c5f30772f50a4ee1e8f81cc7fc5035f1090cdcf46c58
Created a new, empty directory. Created a new file named Dockerfile in the new directory, with the following contents:
FROM hyperledger/composer-rest-server
RUN npm install --production loopback-connector-mongodb passport-github && \
npm cache clean && \
ln -s node_modules .node_modules
Changed into the directory created in step 2, and built the Docker image:
root@ubuntu:~# cd examples/dir/
root@ubuntu:~/examples/dir# ls
Dockerfile ennvars.txt
root@ubuntu:~/examples/dir# docker build -t myorg/my-composer-rest-server .
Sending build context to Docker daemon 4.096 kB
Step 1/2 : FROM hyperledger/composer-rest-server
---> 77cd6a591726
Step 2/2 : RUN npm install --production loopback-connector-couch passport-github && npm cache clean && ln -s node_modules .node_modules
---> Using cache
---> 2ff9537656d1
Successfully built 2ff9537656d1
root@ubuntu:~/examples/dir#
Created a file named ennvars.txt in the same directory.
The contents are as follows:
COMPOSER_CONNECTION_PROFILE=hlfv1
COMPOSER_BUSINESS_NETWORK=blockchainv5
COMPOSER_ENROLLMENT_ID=admin
COMPOSER_ENROLLMENT_SECRET=adminpw
COMPOSER_NAMESPACES=never
COMPOSER_SECURITY=true
COMPOSER_CONFIG='{
"type": "hlfv1",
"orderers": [
{
"url": "grpc://localhost:7050"
}
],
"ca": {
"url": "http://localhost:7054",
"name": "ca.example.com"
},
"peers": [
{
"requestURL": "grpc://localhost:7051",
"eventURL": "grpc://localhost:7053"
}
],
"keyValStore": "/home/ubuntu/.hfc-key-store",
"channel": "mychannel",
"mspID": "Org1MSP",
"timeout": "300"
}'
COMPOSER_DATASOURCES='{
"db": {
"name": "db",
"connector": "mongodb",
"host": "mongo"
}
}'
COMPOSER_PROVIDERS='{
"github": {
"provider": "github",
"module": "passport-github",
"clientID": "a88810855b2bf5d62f97",
"clientSecret": "f63e3c3c65229dc51f1c8964b05e9717bf246279",
"authPath": "/auth/github",
"callbackURL": "/auth/github/callback",
"successRedirect": "/",
"failureRedirect": "/"
}
}'
Loaded the env variables by the following command.
root@ubuntu:~/examples/dir# source ennvars.txt
Started the docker container by the below command
root@ubuntu:~/examples/dir# docker run \
-d \
-e COMPOSER_CONNECTION_PROFILE=${COMPOSER_CONNECTION_PROFILE} \
-e COMPOSER_BUSINESS_NETWORK=${COMPOSER_BUSINESS_NETWORK} \
-e COMPOSER_ENROLLMENT_ID=${COMPOSER_ENROLLMENT_ID} \
-e COMPOSER_ENROLLMENT_SECRET=${COMPOSER_ENROLLMENT_SECRET} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_SECURITY=${COMPOSER_SECURITY} \
-e COMPOSER_CONFIG="${COMPOSER_CONFIG}" \
-e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
-e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
--name rest \
--network composer_default \
-p 3000:3000 \
myorg/my-composer-rest-server
942eb1bfdbaf5807b1fe2baa2608ab35691e9b6912fb0d3b5362531b8adbdd3a
It got executed successfully. So now I should be able to access the persistent and secured REST server by going to the explorer page of LoopBack.
But when I tried to open the above URL, I got the error below.
Error Image
Have I missed any step or done something wrong?
Two things:
You need to put export in front of the envvars in your envvars.txt file.
Check the version of Composer you are running. The FROM hyperledger/composer-rest-server command will pull the latest version of the rest server down, and if your composer version is not updated, the two will be incompatible.
