How can I get Elasticsearch running on CoreOS?

I have a CoreOS cluster with three servers (on DigitalOcean), currently running MongoDB. Now I want to start Elasticsearch on this cluster with one replica (not using the MongoDB river).
I followed the description as outlined here.
This resulted in two services, elasticsearch@.service and elasticsearch-discovery@.service.
elasticsearch@.service
[Unit]
Description=ElasticSearch service
After=etcd.service
After=docker.service
Before=elasticsearch-discovery@%i.service
Requires=elasticsearch-discovery@%i.service
[Service]
KillMode=none
TimeoutStartSec=0
TimeoutStopSec=360
EnvironmentFile=/etc/environment
ExecStartPre=-/usr/bin/docker kill %p-%i
ExecStartPre=-/usr/bin/docker rm %p-%i
ExecStartPre=-/usr/bin/bash -c "echo PreKill and rm done;"
ExecStartPre=/usr/bin/mkdir -p /data/elasticsearch
ExecStartPre=/usr/bin/docker pull dockerfile/elasticsearch
ExecStartPre=/usr/bin/bash -c "echo mkdir and docker pull done;"
ExecStart=/bin/bash -c "\
echo StartingUp; \
curl -f ${COREOS_PUBLIC_IPV4}:4001/v2/keys/services/elasticsearch; \
if [ $? = 0 ]; then \
UNICAST_HOSTS = $(etcdctl ls --recursive /services/elasticsearch | sed 's/\/services\/elasticsearch\///g' | sed 's/$/:9300/' | paste -s -d ','); \
echo Key found; \
else \
UNICAST_HOSTS=''; \
echo No Key found; \
fi;"
ExecStartPost=/bin/bash -c "\
echo Starting Docker; \
/usr/bin/docker run \
--name %p-%i \
--publish 9200:9200 \
--publish 9300:9300 \
--volume /data/elasticsearch:/data \
dockerfile/elasticsearch \
/elasticsearch/bin/elasticsearch \
--node.name=%p-%i \
--cluster.name=nvssearch \
--network.publish_host=${COREOS_PUBLIC_IPV4} \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=$UNICAST_HOSTS;"
ExecStop=/bin/bash/ -c "/usr/bin/docker kill %p-%i"
Restart=on-failure
[X-Fleet]
X-Conflicts=%p@*.service
elasticsearch-discovery@.service
[Unit]
Description=ElasticSearch discovery service
BindsTo=elasticsearch@%i.service
After=elasticsearch@%i.service
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/bash -c '\
while true; do \
curl -f ${COREOS_PUBLIC_IPV4}:9200; \
if [ "$?" = "0" ]; then \
etcdctl set /services/elasticsearch/${COREOS_PUBLIC_IPV4} \ '{"http_port": 9200, "transport_port": 9300}\' --ttl 60; \
else \
etcdctl rm /services/elasticsearch/${COREOS_PUBLIC_IPV4}; \
fi; \
sleep 45; \
done'
ExecStop=/usr/bin.etcdctl rm /services/elasticsearch/${COREOS_PUBLIC_IPV4}
[X-Fleet]
X-ConditionMachineOf=elasticsearch@%i.service
But if I try to run the service (fleetctl submit / load / start elasticsearch@1.service), it immediately dies:
elasticsearch@1.service 475f6273.../IP inactive dead
Running fleetctl journal elasticsearch@1 results in the following message:
Mar 17 09:17:04 nvs-1 systemd[1]: Stopped ElasticSearch service.
That's all; none of the echo statements I added to the service show up. Does anyone have any ideas on how to get further?

It is somewhat simpler to get Elasticsearch running with Weave. My blog post shows how to run it on Vagrant; moving the same setup to the cloud should be pretty straightforward.
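For illustration, a rough sketch of the Weave approach (the peer address, subnet, and container name here are placeholders, and this assumes Weave's script-based CLI of that era):
# on each host, launch the Weave router, peering with the other hosts
weave launch <ip-of-peer-host>
# run Elasticsearch with a fixed address on the shared Weave subnet
weave run 10.2.1.1/24 --name es1 -p 9200:9200 dockerfile/elasticsearch
Because every node then sits on one subnet with a known address, the unicast host list can be written down statically and the etcd-based discovery loop is not needed.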

I've followed the same article you did and ran into the same issue. I tracked it down to the docker container not being available anymore, so I changed the docker run command in the elasticsearch@.service unit:
[Unit]
Description=ElasticSearch service
After=docker.service
Requires=docker.service
[Service]
TimeoutSec=180
EnvironmentFile=/etc/environment
ExecStartPre=/usr/bin/mkdir -p /data/elasticsearch
ExecStartPre=/usr/bin/docker pull elasticsearch
ExecStart=/bin/bash -c '\
curl -f ${COREOS_PRIVATE_IPV4}:4001/v2/keys/services/elasticsearch; \
if [ "$?" = "0" ]; then \
UNICAST_HOSTS=$(etcdctl ls --recursive /services/elasticsearch \
| sed "s/\/services\/elasticsearch\///g" \
| sed "s/$/:9300/" \
| paste -s -d","); \
else \
UNICAST_HOSTS=""; \
fi; \
/usr/bin/docker run \
--rm \
--name %p-%i \
--publish 9200:9200 \
--publish 9300:9300 \
--volume /data/elasticsearch:/data \
elasticsearch \
--node.name=%p-%i \
--cluster.name=logstash \
--network.publish_host=${COREOS_PRIVATE_IPV4} \
--discovery.zen.ping.multicast.enabled=false \
--discovery.zen.ping.unicast.hosts=$UNICAST_HOSTS'
ExecStop=/usr/bin/docker stop %p-%i
ExecStop=/usr/bin/docker rm %p-%i
[X-Fleet]
X-Conflicts=%p@*.service
Apart from that, I think your original post is missing some quotes and double quotes, and has some misplaced blank spaces.
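For example, this line from your ExecStart has spaces around the equals sign, which makes the shell try to run UNICAST_HOSTS as a command instead of performing an assignment:
UNICAST_HOSTS = $(etcdctl ls --recursive /services/elasticsearch | ...)
A shell assignment must have no spaces around the equals sign:
UNICAST_HOSTS=$(etcdctl ls --recursive /services/elasticsearch | ...)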
This is my current elasticsearch-discovery@.service for reference:
[Unit]
Description=ElasticSearch discovery service
BindsTo=elasticsearch@%i.service
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/bash -c '\
while true; do \
curl -f ${COREOS_PRIVATE_IPV4}:9200; \
if [ "$?" = "0" ]; then \
etcdctl set /services/elasticsearch/${COREOS_PRIVATE_IPV4} \'{"http_port": 9200, "transport_port": 9300}\' --ttl 60; \
else \
etcdctl rm /services/elasticsearch/${COREOS_PRIVATE_IPV4}; \
fi; \
sleep 45; \
done'
ExecStop=/usr/bin/etcdctl rm /services/elasticsearch/${COREOS_PRIVATE_IPV4}
[X-Fleet]
X-ConditionMachineOf=elasticsearch@%i.service
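In case it is useful, this is roughly how I submit and start the pair and then verify that each node registered itself (a sketch; it assumes three machines and the unit file names above):
fleetctl submit elasticsearch@.service elasticsearch-discovery@.service
fleetctl start elasticsearch@{1..3}.service elasticsearch-discovery@{1..3}.service
fleetctl list-units
etcdctl ls --recursive /services/elasticsearch
curl ${COREOS_PRIVATE_IPV4}:9200/_cluster/health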

Related

Addition of two variables in slurm script

I have a slurm script processing fMRI data; the maximum value I can give in an array is 999, but my subject names are over 1000.
So I need to do an addition in my slurm script. I tried:
a=${SLURM_ARRAY_TASK_ID} sum=$(($a + 1200))
#!/bin/sh
#
#SBATCH --job-name psy-stephan_fmriprep_gsp
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8GB
#SBATCH --output /projects/core-psy/logs/nako/stephan/slurm-%j.log
#SBATCH --error /projects/core-psy/logs/nako/stephan/slurm-%j.err
a=${SLURM_ARRAY_TASK_ID}
sum=$(($a + 1200))
singularity run \
--home /projects/core-psy/tmp3:/home/fmriprep \
--cleanenv \
-B /projects/core-psy/data/nako//swunderl/GSP_new/:/input \
-B /projects/core-psy/data/nako/swunderl/GSP_new/derivatives:/output \
-B //projects/core-psy/data/nako/swunderl/GSP_new_workdir/:/workdir \
-B /projects/core-psy/data/nako/swunderl/license.txt:/license \
/projects/core-psy/images/fmriprep-20.2.2.simg /input/sub-$sum /output participant \
--fs-license-file /license \
--skip-bids-validation \
--use-aroma \
--fs-no-reconall \
-w /workdir/ \
#--output-layout bids \
# sbatch --account=core-psy sbatch-multiple-job.slurm
So I can pass SLURM_ARRAY_TASK_ID as 1 on the command line.
But the addition keeps giving me sub-0+1200 and not the actual sum of both numbers.
Since you do not need to perform math on a, you can perform variable expansion on the string to make a 4-digit subject label for your fmriprep command:
sum="1${SLURM_ARRAY_TASK_ID}"
This way sbatch -a 200 ./your_job_script.sh will run for sub-1200. If you have labels like 1001, you will need to extend the variable expansion, since 001 becomes a SLURM_ARRAY_TASK_ID of 1.
Here's an example adapted from my own (albeit not the most succinct) code for sbatch scripts:
if [ ${#SLURM_ARRAY_TASK_ID} == 1 ];
then
inputNo="100${SLURM_ARRAY_TASK_ID}"
singularity run \
--home /projects/core-psy/tmp3:/home/fmriprep \
--cleanenv \
-B /projects/core-psy/data/nako//swunderl/GSP_new/:/input \
-B /projects/core-psy/data/nako/swunderl/GSP_new/derivatives:/output \
-B //projects/core-psy/data/nako/swunderl/GSP_new_workdir/:/workdir \
-B /projects/core-psy/data/nako/swunderl/license.txt:/license \
/projects/core-psy/images/fmriprep-20.2.2.simg /input/sub-${inputNo} /output participant \
--fs-license-file /license \
--skip-bids-validation \
--use-aroma \
--fs-no-reconall \
-w /workdir/
elif [ ${#SLURM_ARRAY_TASK_ID} == 2 ];
then
inputNo="10${SLURM_ARRAY_TASK_ID}"
singularity run \
--home /projects/core-psy/tmp3:/home/fmriprep \
--cleanenv \
-B /projects/core-psy/data/nako//swunderl/GSP_new/:/input \
-B /projects/core-psy/data/nako/swunderl/GSP_new/derivatives:/output \
-B //projects/core-psy/data/nako/swunderl/GSP_new_workdir/:/workdir \
-B /projects/core-psy/data/nako/swunderl/license.txt:/license \
/projects/core-psy/images/fmriprep-20.2.2.simg /input/sub-${inputNo} /output participant \
--fs-license-file /license \
--skip-bids-validation \
--use-aroma \
--fs-no-reconall \
-w /workdir/
elif [ ${#SLURM_ARRAY_TASK_ID} == 3 ];
then
inputNo="1${SLURM_ARRAY_TASK_ID}"
singularity run \
--home /projects/core-psy/tmp3:/home/fmriprep \
--cleanenv \
-B /projects/core-psy/data/nako//swunderl/GSP_new/:/input \
-B /projects/core-psy/data/nako/swunderl/GSP_new/derivatives:/output \
-B //projects/core-psy/data/nako/swunderl/GSP_new_workdir/:/workdir \
-B /projects/core-psy/data/nako/swunderl/license.txt:/license \
/projects/core-psy/images/fmriprep-20.2.2.simg /input/sub-${inputNo} /output participant \
--fs-license-file /license \
--skip-bids-validation \
--use-aroma \
--fs-no-reconall \
-w /workdir/
fi
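If you prefer something more succinct, the three branches can be collapsed with printf, which zero-pads the task ID before prefixing the leading 1 (a sketch, under the same assumption that every subject label is 1 followed by the three-digit, zero-padded task ID):
# task 1 -> 1001, task 12 -> 1012, task 200 -> 1200
inputNo=$(printf '1%03d' "${SLURM_ARRAY_TASK_ID}")
The singularity run command then stays identical for every array index.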

Bash script returns a substring, not the whole column value, from psql

I want to get the value of message in the db, which for example is "the error is missing index", but when I run the code below, it only returns "the" for error_type. How can it be that only the first word is returned rather than the whole value? How can I get the query result with the whole value?
declare -a ROW=($(psql \
-X \
-U $DB_USER \
-h $DB_HOST \
-d $DB_NAME \
-p $DB_PORT \
--single-transaction \
--set AUTOCOMMIT=off \
--set ON_ERROR_STOP=on \
--no-align \
-t \
--field-separator ' ' \
--quiet \
-c "SELECT message
FROM error_message
WHERE created_at > '$t1' and created_at < '$t'")
)
error_type=${ROW[0]}
echo $error_type
Don't use an array; use an ordinary string variable to contain the whole result.
error_type=$(psql \
-X \
-U $DB_USER \
-h $DB_HOST \
-d $DB_NAME \
-p $DB_PORT \
--single-transaction \
--set AUTOCOMMIT=off \
--set ON_ERROR_STOP=on \
--no-align \
-t \
--field-separator ' ' \
--quiet \
-c "SELECT message
FROM error_message
WHERE created_at > '$t1' and created_at < '$t'")
echo "$error_type"
Remember to quote variables unless you need word splitting and wildcard expansion to be done.
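The reason the array version returned only "the" is word splitting: the unquoted command substitution in ROW=($(psql ...)) is split on whitespace, so each word of the message becomes its own array element and ${ROW[0]} is just the first word. A minimal illustration:
msg="the error is missing index"
declare -a ROW=($msg)
echo "${ROW[0]}"    # prints: the
echo "${#ROW[@]}"   # prints: 5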

How to rewrite grep in #!/bin/sh?

I have the following shell script that fails with a syntax error:
#!/bin/sh
.....
.....
clean_up() {
echo "Clean up"
docker stop $KC_TEST_SVC
docker stop $KC_NAME
docker stop $POSTGRES_NAME
docker network rm $KC_NETWORK
}
# Check network if exists, if not then create
docker network inspect $KC_NETWORK
if [ $? -eq 1 ]; then
docker network create $KC_NETWORK
if [ $? -eq 1 ]; then
exit 1
fi
fi
docker run -d --rm --name $POSTGRES_NAME \
-e POSTGRES_DB=$POSTGRES_DB \
-e POSTGRES_USER=$POSTGRES_USER \
-e POSTGRES_PASSWORD=$POSTGRES_PW \
--network=$KC_NETWORK \
postgres:12.3
if [ $? -eq 1 ]; then
exit 1
fi
docker build --build-arg STAGE=int --tag $KC_TAG .
if [ $? -eq 1 ]; then
exit 1
fi
docker run -d --rm --name $KC_NAME \
-e DB_VENDOR=POSTGRES \
-e DB_ADDR=$POSTGRES_NAME \
-e DB_DATABASE=$POSTGRES_DB \
-e DB_USER=$POSTGRES_USER \
-e DB_PASSWORD=$POSTGRES_PW \
-e KEYCLOAK_USER=$KC_USER \
-e KEYCLOAK_PASSWORD=$KC_PW \
-e KEYCLOAK_LOGLEVEL=DEBUG \
--network=$KC_NETWORK \
$KC_TAG "-Dkeycloak.migration.action=import -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/opt/jboss/keycloak/import-dir -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
if [ $? -eq 1 ]; then
exit 1
fi
# Wait until Keycloak get started
sleep 30
# Run test jetty service that is protected with Keycloak
docker run -d --rm --name $KC_TEST_SVC \
-v $(pwd)/app:/app \
--network $KC_NETWORK \
hub.databaker.io/devops/jetty-keycloak:0.1.6
if [ $? -eq 1 ]; then
exit 1
fi
# Request Tokens for credentials
KC_URL=http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/token
echo "Keycloak URL => $KC_URL"
echo "Test the client connection to Keycloak with user $KC_TEST_USER"
KC_RESPONSE=$(
docker run --rm --network=$KC_NETWORK curlimages/curl:7.71.1 -X POST \
-d "username=$KC_TEST_USER" \
-d "password=$KC_TEST_PW" \
-d "grant_type=password" \
-d "client_id=$KC_TEST_CLIENT" \
$KC_URL | docker run --rm -i stedolan/jq .
)
echo "Response from Keycloak $KC_RESPONSE"
if grep -q "error" <<< "$KC_RESPONSE"; then
echo "++++++++Error+++++++++"
exit 1
fi
KC_ACCESS_TOKEN=$(echo "$KC_RESPONSE" | docker run --rm -i stedolan/jq -r .access_token)
echo "Access token from KC => $KC_ACCESS_TOKEN"
echo "Make request to protected service"
SVC_URL=http://$KC_TEST_SVC:8080/api/health
SVC_RES=$(docker run --rm --network=$KC_NETWORK curlimages/curl:7.71.1 -v -k \
-H "Authorization: Bearer $KC_ACCESS_TOKEN" \
-H "Accept: application/json" \
$SVC_URL | docker run --rm -i stedolan/jq .status)
clean_up
echo "$SVC_RES"
if [ "$SVC_RES" != "I am healthy" ]; then
echo "Test failed."
exit 1
else
echo "Test was successful. The service response $SVC_RES."
fi
it complains:
Syntax error: redirection unexpected
due to of the following line:
if grep -q "error" <<< "$KC_RESPONSE"; then
echo "++++++++Error+++++++++"
exit 1
fi
#!/bin/sh does not support grep. How to rewrite it?
<<< is a here-string and it's a bash extension. In a POSIX shell, just pipe the data:
if printf "%s\n" "$KC_RESPONSE" | grep -q "error"; then
Do not use if [ $? -eq 1 ]; then, as it's error-prone. Use if ! command; then instead. (You might want to research set -eu.)
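Applied to one of the blocks in your script, that looks like:
if ! docker network create $KC_NETWORK; then
exit 1
fi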
Change #!/bin/sh to #!/bin/bash (adjusting the path to bash if necessary; which bash helps to find where bash is located and whether it is installed at all), or rewrite your script in a portable way. In that case, create a temporary file, put the string in that file, redirect that file to grep (with a single <), and then delete the temporary file.
Given how long that description is, you can see why bash added the extension; on the other hand, it is little more than syntactic sugar: you can easily achieve the same thing.
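For completeness, here is a sketch of that portable temporary-file variant (using mktemp; variable names follow your script):
tmpfile=$(mktemp)
printf '%s\n' "$KC_RESPONSE" > "$tmpfile"
if grep -q "error" < "$tmpfile"; then
echo "++++++++Error+++++++++"
rm -f "$tmpfile"
exit 1
fi
rm -f "$tmpfile"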

Submit multiple json payloads with curl

I am working with Confluent Kafka and ZooKeeper in Docker. I successfully submit a JSON file to a Kafka topic and then consume it, as follows:
curl -X POST \
-H "Content-Type: application/json" \
--data '{"name": "quickstart-file-source", "config": {"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector", "tasks.max":"1", "topic":"quickstart-data", "file": "/tmp/quickstart/input.json"}}' \
http://localhost:28081/connectors
The above curl command has only one JSON file and executes successfully, but I need to post multiple JSON files. Is there any way to do it?
Here is my kafka connect
docker run -d \
--name=kafka-connect-avro \
--net=host \
-e CONNECT_BOOTSTRAP_SERVERS=localhost:29091 \
-e CONNECT_REST_PORT=28081 \
-e CONNECT_GROUP_ID="quickstart-avro" \
-e CONNECT_CONFIG_STORAGE_TOPIC="quickstart-avro-config" \
-e CONNECT_OFFSET_STORAGE_TOPIC="quickstart-avro-offsets" \
-e CONNECT_STATUS_STORAGE_TOPIC="quickstart-avro-status" \
-e CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1 \
-e CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1 \
-e CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1 \
-e CONNECT_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_KEY_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_INTERNAL_VALUE_CONVERTER="org.apache.kafka.connect.json.JsonConverter" \
-e CONNECT_REST_ADVERTISED_HOST_NAME="localhost" \
-e CONNECT_LOG4J_ROOT_LOGLEVEL=DEBUG \
-v /tmp/quickstart/file:/tmp/quickstart \
confluentinc/cp-kafka-connect:latest
Reference link
You could make individual JSON files in the current directory and post them separately in a loop
e.g.
$ ls *.json # list your connectors
payload1.json
payload2.json
And then loop over them
for f in *.json; do
curl -X POST -H "Content-Type: application/json" \
--data "@${f}" http://localhost:28081/connectors
done
Alternatively, it may be simpler to use cat.

mailgun e-mail attachment cronjob

I have a bash script with the content below. It works fine when I run it directly, but if I add it to a cron job, it fails on the attachments.
#!/bin/bash
echo "Clearing the files"
rm -f ci/assets/*.csv
sleep 10
echo "Shot report..."
node ci/export-shot.js
echo "Not shot report..."
node ci/export-not-shot.js
echo "Retouching delivered report..."
node ci/export-retouching-delivered.js
echo "Sending email"
curl -s --user 'api:key-6gdgdfg852fgggy4893g-t5cjgfseerwefgfdgdf2183' \
https://api.mailgun.net/v3/test.co/messages \
-F from='Openly Report Bot <report-bot@test.co>' \
-F to='kirthan.b@test.co' \
-F subject="`date -v-1d +%F` : Shot/Not Shot/Delivered Articles" \
-F text='Daily report of shot, not shot and delivered articles' \
--form-string html='<html>HTML version of the body</html>' \
-F attachment=@ci/assets/shot.csv \
-F attachment=@ci/assets/not-shot.csv \
-F attachment=@ci/assets/delivered.csv
Please help me on this.
