I'm currently testing Rundeck for some PoCs I'm working on.
I currently only have it running on a single machine via docker, so I don't have access to multiple nodes.
I want to simulate the following job dependencies:
JobA is a prerequisite for JobB and JobC, and JobB and JobC must run in parallel.
JobC must also run at 9:00 AM local time, regardless of whether JobA has started or finished yet.
All of this should scale easily to several hundred jobs.
Can anyone help me with this?
I have tried several configurations, including the Job State plugin, but I can't get this to work. The best I can do is run all the jobs either sequentially or in parallel.
The best (and easiest) way to achieve that is to use the Ruleset workflow strategy (only available in PagerDuty Process Automation On Prem, formerly "Rundeck Enterprise"), calling your jobs via Job Reference steps.
So, on the community version, the "workaround" is to call the jobs from a parent job via the Rundeck API, which basically means scripting the custom workflow behavior yourself, e.g.:
JobA
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: b156b3ed-fde6-4fdc-af81-1ee919edc4ff
  loglevel: INFO
  name: JobA
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: whoami
    keepgoing: false
    strategy: node-first
  uuid: b156b3ed-fde6-4fdc-af81-1ee919edc4ff
JobB
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 4c46c795-20c9-47a8-aa2e-e72f183b150f
  loglevel: INFO
  name: JobB
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: whoami
    keepgoing: false
    strategy: node-first
  uuid: 4c46c795-20c9-47a8-aa2e-e72f183b150f
JobC (runs every day at 9:00 AM, regardless of the JobA execution).
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: ce45c4d1-1350-407b-8001-2d90daf49eaa
  loglevel: INFO
  name: JobC
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  schedule:
    month: '*'
    time:
      hour: '09'
      minute: '00'
      seconds: '0'
    weekday:
      day: '*'
    year: '*'
  scheduleEnabled: true
  sequence:
    commands:
    - exec: whoami
    keepgoing: false
    strategy: node-first
  uuid: ce45c4d1-1350-407b-8001-2d90daf49eaa
ParentJob (all the behavior logic is inside a bash script; it needs the jq tool to work)
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 6e67020c-7293-4674-8d56-5db811ae5745
  loglevel: INFO
  name: ParentJob
  nodeFilterEditable: false
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - description: A friendly starting message
      exec: echo "starting..."
    - description: Execute JobA and Get the Status
      fileExtension: .sh
      interpreterArgsQuoted: false
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            regex: ^(job_a_exec_id)\s*=\s*(.+)$
          type: key-value-data
      script: "# execute JobA\nexec_id=$(curl -s -X POST \\\n \"http://localhost:4440/api/41/job/b156b3ed-fde6-4fdc-af81-1ee919edc4ff/run\"\
        \ \\\n --header \"Accept: application/json\" \\\n --header \"X-Rundeck-Auth-Token:\
        \ 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd\" | jq .id)\n \necho \"job_a_exec_id=$exec_id\""
      scriptInterpreter: /bin/bash
    - description: Time to execute the Job
      exec: sleep 2
    - description: 'Get the execution Status '
      fileExtension: .sh
      interpreterArgsQuoted: false
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'false'
            regex: ^(job_a_exec_status)\s*=\s*(.+)$
          type: key-value-data
      script: "exec_status=$(curl -s -X GET \\\n 'http://localhost:4440/api/41/execution/@data.job_a_exec_id@'\
        \ \\\n --header 'Accept: application/json' \\\n --header 'X-Rundeck-Auth-Token:\
        \ 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd' | jq -r .status)\n \necho \"job_a_exec_status=$exec_status\""
      scriptInterpreter: /bin/bash
    - description: 'Run JobB and JobC in parallel, depending on the JobA execution status'
      fileExtension: .sh
      interpreterArgsQuoted: false
      script: "# Now execute JobB and JobC depending on the JobA final status (dependency)\n\
        \nif [ @data.job_a_exec_status@ = \"succeeded\" ]; then\n echo \"Job A execution\
        \ is OK, running JobB and JobC in parallel...\"\n \n # JobB \n curl -X POST\
        \ \\\n \"http://localhost:4440/api/41/job/4c46c795-20c9-47a8-aa2e-e72f183b150f/run\"\
        \ \\\n --header \"Accept: application/json\" \\\n --header \"X-Rundeck-Auth-Token:\
        \ 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd\" &\n \n # JobC\n curl -X POST \\\n\
        \ \"http://localhost:4440/api/41/job/ce45c4d1-1350-407b-8001-2d90daf49eaa/run\"\
        \ \\\n --header \"Accept: application/json\" \\\n --header \"X-Rundeck-Auth-Token:\
        \ 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd\" &\nelse\n echo \"Job A has failed, dependency\
        \ failed\"\nfi"
      scriptInterpreter: /bin/bash
    - description: A friendly ending message
      exec: echo "done."
    keepgoing: false
    strategy: node-first
  uuid: 6e67020c-7293-4674-8d56-5db811ae5745
Parent Job's JobB/JobC launching script:
# Now execute JobB and JobC depending on the JobA final status (dependency)
if [ @data.job_a_exec_status@ = "succeeded" ]; then
  echo "Job A execution is OK, running JobB and JobC in parallel..."
  # JobB
  curl -X POST \
    "http://localhost:4440/api/41/job/4c46c795-20c9-47a8-aa2e-e72f183b150f/run" \
    --header "Accept: application/json" \
    --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" &
  # JobC
  curl -X POST \
    "http://localhost:4440/api/41/job/ce45c4d1-1350-407b-8001-2d90daf49eaa/run" \
    --header "Accept: application/json" \
    --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" &
else
  echo "Job A has failed, dependency failed"
fi
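Note that the fixed sleep 2 step assumes JobA finishes almost immediately. For longer-running jobs, one possible variation (not part of the original example) is to poll the same execution endpoint until the status changes, reusing the token and the captured job_a_exec_id, and assuming the API reports "running" until the execution ends:
# poll JobA's execution until it is no longer running, then expose the final status
exec_status="running"
while [ "$exec_status" = "running" ]; do
  sleep 5
  exec_status=$(curl -s -X GET \
    "http://localhost:4440/api/41/execution/@data.job_a_exec_id@" \
    --header "Accept: application/json" \
    --header "X-Rundeck-Auth-Token: 45hNfblPiF6C1l9CfJeG37l08oAgs0Gd" | jq -r .status)
done
echo "job_a_exec_status=$exec_status"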
Related
I'm attempting to pass an array of dynamically fetched data from one GitHub Action job to the actual job doing the build. This array will be used as part of a matrix to build for multiple versions. However, I'm encountering an issue when the bash variable storing the array is evaluated.
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      versions: ${{ steps.matrix.outputs.value }}
    steps:
      - id: matrix
        run: |
          sudo apt-get install -y jq && \
          MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          VERSIONS=($MAINNET $TESTNET) && \
          echo "${VERSIONS[@]}" && \
          VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s) && \
          echo $VERSION_JSON && \
          echo '::set-output name=value::$VERSION_JSON'
        shell: bash
      - id: debug
        run: |
          echo "Result: ${{ steps.matrix.outputs.value }}"
  changes:
    needs: setup
    runs-on: ubuntu-latest
    # Set job outputs to values from filter step
    outputs:
      core: ${{ steps.filter.outputs.core }}
      package: ${{ steps.filter.outputs.package }}
    strategy:
      matrix:
        TEST: [buy, cancel, create_auction_house, delegate, deposit, execute_sale, sell, update_auction_house, withdraw_from_fee, withdraw_from_treasury, withdraw]
        SOLANA_VERSION: ${{fromJson(needs.setup.outputs.versions)}}
    steps:
      - uses: actions/checkout@v2
      # For pull requests it's not necessary to checkout the code
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            core:
              - 'core/**'
            package:
              - 'auction-house/**'
      - name: debug
        id: debug
        working-directory: ./auction-house/program
        run: echo ${{ needs.setup.outputs.versions }}
In the setup job above, the two versions are evaluated to a bash array (in VERSIONS) and converted into a JSON array to be passed to the next job (in VERSION_JSON). The last echo in the matrix step results in a print of [ "1.10.31", "1.11.1" ], but the debug step prints out this:
Run echo "Result: "$VERSION_JSON""
echo "Result: "$VERSION_JSON""
shell: /usr/bin/bash -e {0}
env:
CARGO_TERM_COLOR: always
RUST_TOOLCHAIN: stable
Result:
The changes job also results in an error:
Error when evaluating 'strategy' for job 'changes'.
.github/workflows/program-auction-house.yml (Line: 44, Col: 25): Unexpected type of value '$VERSION_JSON', expected type: Sequence.
It definitely seems like the $VERSION_JSON variable isn't actually being evaluated properly, but I can't figure out where the evaluation is going wrong.
For echo '::set-output name=value::$VERSION_JSON' you need to use double quotes, or bash will not expand $VERSION_JSON.
set-output is also not happy with multi-line data. In your case, you can use jq -s -c so the output is a single line.
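Putting both suggestions together, the body of the matrix step could look roughly like the sketch below; the endpoints and jq filters are copied from the question, and only the quoting and the jq -s -c call change:
MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]')
TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]')
VERSIONS=($MAINNET $TESTNET)
# -s slurps the two JSON strings into an array, -c keeps it on a single line for set-output
VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s -c)
echo "$VERSION_JSON"
# double quotes so the shell expands $VERSION_JSON
echo "::set-output name=value::$VERSION_JSON"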
Below are two jobs in the build stage.
By default, they share a common condition via the extends keyword, .ifawsdeploy.
Only one of them should run: if the variable $ADMIN_SERVER_IP is provided, then connect_admin_server should run, and that part works.
If no value is provided for $ADMIN_SERVER_IP, then create_admin_server should run, but it is not running.
.ifawsdeploy:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

variables:
  TEST_CREATE_ADMIN:
    #value: aws
    description: "Platform, currently aws only"
  SUB_PLATFORM:
    value: aws
    description: "Platform, currently aws only"
  REGION:
    value: "us-west-2"
    description: "region where to deploy company"
  PACKAGEURL:
    value: "http://somerpmurl.x86_64.rpm"
    description: "company rpm file url"
  ACCOUNT_NAME:
    value: "testsubaccount"
    description: "Account name of sub account to refer in the deployment, no need to match in AWS"
  ROLE_ARN:
    value: "arn:aws:iam::491483064167:role/uat"
    description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
  tfenv_version: "1.1.9"
  DEV_PUB_KEY:
    description: "Optional public key file to add access to admin server"
  ADMIN_SERVER_IP:
    description: "Existing Admin Server IP Address"
  ADMIN_SERVER_SSH_KEY:
    description: "Existing Admin Server SSH_KEY PEM content"
#export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
  - aws configure set region $REGION
  - aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
  - aws sts get-caller-identity
  - aws configure set source_profile default --profile $ACCOUNT_NAME
  - aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
  - aws sts get-caller-identity --profile $ACCOUNT_NAME
  - aws configure set region $REGION --profile $ACCOUNT_NAME

.copy_remote_log: &copy_remote_log
  - if [ -e outfile ]; then rm outfile; fi
  - copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
  - new_copy_command=${copy_command/"%s"/"outfile"}
  - new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
  - echo $new_copy_command
  - new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
  - echo $new_copy_command
  - sleep 10
  - eval $new_copy_command

.check_remote_log: &check_remote_log
  - sleep 10
  - grep Error outfile || true
  - sleep 10
  - returnCode=$(grep -c Error outfile) || true
  - echo "Return code received $returnCode"
  - if [ $returnCode -ge 1 ]; then exit 1; fi
  - echo "No errors"

.prepare_ssh_key: &prepare_ssh_key
  - echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
  - cat ssh_key.pem
  - sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
  - sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
  - perl -p -i -e 's/\s/\n/g' ssh_key.pem
  - sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
  - sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
  - cat ssh_key.pem
  - chmod 400 ssh_key.pem
connect-admin-server:
  stage: build
  allow_failure: true
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
  extends:
    - .ifawsdeploy
  script:
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - *prepare_ssh_key
    - echo "ssh -i ssh_key.pem ec2-user@${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
    - echo "scp -q -i ssh_key.pem %s ec2-user@${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
    - test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
    - echo $test_pre_command
    - test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
    - echo $test_command
    - echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
    - *copy_remote_log
    - echo "Now checking log file for returnCode"
    - *check_remote_log
  artifacts:
    untracked: true
    when: always
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
  after_script:
    - cat $CI_PROJECT_DIR/ssh_key.pem
    - cat $CI_PROJECT_DIR/ssh_command.txt
    - cat $CI_PROJECT_DIR/scp_command.txt
create-admin-server:
  stage: build
  allow_failure: false
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
  extends:
    - .ifawsdeploy
  script:
    - echo "admin server $ADMIN_SERVER_IP"
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - *configure_aws_cli
    - aws sts get-caller-identity --profile $ACCOUNT_NAME #to check whether updated correctly or not
    - git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)@gitlab.com/company-oss/project-n-setup.git"
    # Install tfenv
    - git clone https://github.com/tfutils/tfenv.git ~/.tfenv
    - ln -s ~/.tfenv /root/.tfenv
    - ln -s ~/.tfenv/bin/* /usr/local/bin
    # Install terraform 1.1.9 through tfenv
    - tfenv install $tfenv_version
    - tfenv use $tfenv_version
    # Copy the tfvars temp file to the terraform setup directory
    - cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
    - cd project-n-setup/$SUB_PLATFORM/
    - envsubst < admin_server.temp_tfvars > admin_server.tfvars
    - rm -rf .terraform || exit 0
    - cat ~/.aws/config
    - terraform init -input=false
    - terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
    - terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
    - terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
  artifacts:
    untracked: true
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
      - "$CI_PROJECT_DIR/terraform.tfstate"
      - "$CI_PROJECT_DIR/admin_server.tfvars"
How can I fix that?
I tried the step below, based on suggestions in the comments section.
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
And I included the above in the extends: of a job.
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
But I am getting a syntax error in the .ifteardownordestroy part.
jobs:destroy-admin-server:rules:rule if invalid expression syntax
You are overriding rules: in your job that extends .ifawsdeploy. rules: are not combined in this case -- the definition of rules: in the job takes complete precedence.
Take for example the following configuration:
.template:
  rules:
    - one
    - two

myjob:
  extends: .template
  rules:
    - a
    - b
In the above example, the myjob job only has rules a and b in effect. Rules one and two are completely ignored because they are overridden in the job configuration.
Instead of using extends:, you can use !reference to preserve and combine rules. You can also use YAML anchors if you want.
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
If no value is provided for $ADMIN_SERVER_IP, then create_admin_server should run
Lastly, pay special attention to your rules:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
In this case, there are no rules that allow the job to run ever. You either need a case that will evaluate true for the job to run, or to have a default case (an item with no if: condition) in order for the job to run.
To get the behavior you expect, you probably want your default case to be on_success:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
You can change your rules to:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always
or
rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
I have a sample here: try-rules-stackoverflow-72545625 - GitLab, and the pipeline records: Pipeline no value - GitLab, Pipeline has value - GitLab.
I am trying to implement a CI/CD pipeline using GitLab, and I created a CI file with the following content.
stages:
  - deploy

image:
  name: "ubuntu:16.04"

first-pipeline:test:
  stage: deploy
  tags:
    - executor:docker
  only:
    refs:
      - branches
      - schedules
  script:
    - export ANSIBLE_HOST_KEY_CHECKING=False
    - echo "Job: $job_param"
    - ansible-playbook -i production.ini -e "job_id=$job_param ansible_ssh_user=ubuntu" my-playbook.yml -l "10.37.23.230"
    - apk add curl
    - curl -X POST http://98.121.222.32:8080/api/v2/removejob -H 'Content-Type: application/json' -d "{"jobId": $job_param}"
    - echo $query
    - echo "Executed at= $now"
I keep running into the following error message : bad indentation of a mapping entry
24 | ...
25 | ... ubuntu" my-playbook.yml -l "10.37.23.230"
26 | ...
27 | ... application/json' -d "{"jobId": $job_param}"
-----------------------------------------^
28 | ...
29 | ...
Any suggestion on how to fix it? Any help will be appreciated. Thank you.
This worked:
'curl -H "Content-Type: application/json" -X POST http://98.121.222.32:8080/api/v2/removejob -d "{"jobId": $job_param}"'
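For comparison, the inner quotes can also be escaped so the payload stays valid JSON; this is only a sketch and assumes $job_param holds a numeric job id (a string value would need its own quotes):
curl -X POST http://98.121.222.32:8080/api/v2/removejob \
  -H 'Content-Type: application/json' \
  -d "{\"jobId\": $job_param}"
Inside the GitLab script: list, the whole command still needs to be wrapped in single quotes, as in the working line above, so the YAML parser does not trip over the colon inside the JSON.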
I am writing a Jenkins Pipeline job for setting up AWS infrastructure using API calls to our in-house AWS CLI wrapper library. Running the raw bash scripts on a CentOS box or as a Jenkins Freestyle job runs fine. However, it fails in the context of a Pipeline job. I think that the quotes may need to be different for the Pipeline job but I am not sure how.
After further investigation, I found that the curl command returns the wrong response from the service when running the scripts within a Jenkins Pipeline job.
pipeline {
    agent any
    stages {
        stage('Checkout code from Git'){
            steps {
                echo "Checkout code from a GitHub repository"
                // Checkout code from a GitHub repository
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: false, recursiveSubmodules: true, reference: '', trackingSubmodules: false]], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'xxxx', url: 'git@github.com:bbc/repo.git']]])
            }
        }
        stage('Call our internal AWS CLI Wrapper System API to perform an ACTION on a specified ENVIRONMENT') {
            steps {
                script {
                    if("${params.ENVIRONMENT}" == 'int' && "${params.ACTION}" == 'create'){
                        echo "ENVIRONMENT=${params.ENVIRONMENT}, ACTION=${params.ACTION}"
                        echo ""
                        sh '''#!/bin/bash
                        # Create Neptune Cluster for the Int environment
                        cd blah-db
                        echo "Current working directory is $PWD"
                        CLOUD_FORMATION_FILE=$PWD/infrastructure/templates/neptune-cluster.json
                        echo "The CloudFormation file to operate on is $CLOUD_FORMATION_FILE"
                        echo "Running jq to transform the source CloudFormation file"
                        template=$(jq -M '.Parameters.Env.Default="int"' $CLOUD_FORMATION_FILE)
                        echo "Echoing the transformed CloudFormation file: \n$template"
                        echo "Running curl to make the http request to our internal AWS CLI Wrapper System"
                        curl -d "{\"aws_account\": \"1111111111\", \"region\": \"us-east-1\", \"name_suffix\": \"cluster\", \"template\": $template}" \
                        -H 'Content-Type: application/json' -H 'Accept: application/json' https://base.api.url/v1/services/blah-neptune/int/stacks \
                        --cert /path/to/client/certificate/client.crt --key /path/to/client/private-key/client.key
                        cd ..
                        pwd
                        # Set a timer to run for 300 seconds or 5 minutes to create a delay to allow for the Neptune Cluster to be fully provisioned first before adding instances to it.
                        '''
                    }
                }
            }
        }
    }
}
The actual result that I get from making the API call:
{"error": "Invalid JSON. Expecting property name: line 1 column 1 (char 1)"}
Try changing the curl as follows:
curl -d '{"aws_account": "1111111111", "region": "us-east-1", "name_suffix": "cluster", "template": $template}'
Or assign the whole command to a variable and print it out, to check whether it is what you wanted:
cmd = '''#!/bin/bash
cd blah-db
...
'''
echo cmd // compare the output string to the cmd of freestyle job.
sh cmd
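Another option, offered only as a sketch rather than a fix confirmed by the answer above, is to let jq build the JSON body inside the existing sh block, so the nested quote escaping and the $template expansion are handled for you; the account id, endpoint, certificate paths and field names are copied from the question:
# build the request body with jq instead of hand-escaped quotes
payload=$(jq -n \
  --arg aws_account "1111111111" \
  --arg region "us-east-1" \
  --arg name_suffix "cluster" \
  --argjson template "$template" \
  '{aws_account: $aws_account, region: $region, name_suffix: $name_suffix, template: $template}')
curl -d "$payload" \
  -H 'Content-Type: application/json' -H 'Accept: application/json' https://base.api.url/v1/services/blah-neptune/int/stacks \
  --cert /path/to/client/certificate/client.crt --key /path/to/client/private-key/client.key
Because the script is passed through a triple-single-quoted sh step, Groovy does not interpolate the dollar signs, so $template and $payload are expanded by the shell as intended.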
I'm trying to build an elasticsearch image with preloaded data. I'm doing a restore operation from S3.
FROM elasticsearch:5.3.1
ARG bucket
ARG access_key
ARG secret_key
ARG repository
ARG snapshot
ENV ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
RUN elasticsearch-plugin install repository-s3
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh wait-for-it.sh
RUN chmod +x wait-for-it.sh
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & ./wait-for-it.sh -t 0 localhost:9200 -- echo "Elasticsearch is ready!" && \
curl -H 'Content-Type: application/json' -X PUT "localhost:9200/_snapshot/$repository" -d '{ "type": "s3", "settings": { "bucket": "'$bucket'", "access_key": "'$access_key'", "secret_key": "'$secret_key'" } }' && \
curl -H "Content-Type: application/json" -X POST "localhost:9200/_snapshot/$repository/$snapshot/_restore?wait_for_completion=true" -d '{ "indices": "myindex", "ignore_unavailable": true, "index_settings": { "index.number_of_replicas": 0 }, "ignore_index_settings": [ "index.refresh_interval" ] }' && \
curl -H "Content-Type: application/json" -X GET "localhost:9200/_cat/indices"
RUN kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
CMD ["-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
The image is built successfully, but when I start the container the index is lost. I'm not using any volumes. What am I missing?
version: '2'
services:
  elasticsearch:
    container_name: "elasticsearch"
    build:
      context: ./elasticsearch/
      args:
        access_key: access_key_here
        secret_key: secret_key_here
        bucket: bucket_here
        repository: repository_here
        snapshot: snapshot_here
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g -Des.path.conf=/etc/elasticsearch"
It seems that volumes cannot be baked into images. The directory that holds the generated data is declared as a volume by the parent image, and any changes made to a volume directory during the build are discarded. The only way to do this is to fork the parent Dockerfile and remove the VOLUME declaration.
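A quick way to confirm this is to inspect which directories the base image declares as volumes (a diagnostic only; the exact path printed depends on the image):
# list the volumes declared by the parent image
docker inspect elasticsearch:5.3.1 --format '{{json .Config.Volumes}}'
If the Elasticsearch data directory appears in that list, anything restored into it during docker build is discarded when the build finishes, which matches the behavior described above.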