Multiple process execution and ordering issues in docker-compose - bash

I am trying to use docker-compose to launch a zookeeper server, a bootstrap process for my API, and another process that consumes the data my API writes to the zookeeper server.
Once I execute docker-compose up, my zookeeper server launches successfully and my bootstrap API is able to connect to it and write data successfully.
The constraint is that my second process must wait for my API to write its data to zookeeper; otherwise it throws an exception, since no node would have been created by the API yet. So, in the command section of my docker-compose.yml file, I wrapped my bootstrap API in a bash command with an infinite while loop so the program doesn't exit, and added a sleep statement to my second process so it waits until the API has done its job (a crude sort of race-condition handling).
From what I understand, docker-compose handles ordering via "links" in the docker-compose.yml file, but it doesn't track the state of the individual processes being launched. By state I mean that the second process needs to start only once the first process has finished successfully.
Here's my docker-compose.yml file -
zookeeper:
  image: xyz.com/temp
  ports:
    - "10000:2181"
bootstrapapi:
  image: xyz.com/temp1
  command: /bin/bash -c "cd /code; make test_data ZK_HOSTS=zookeeper:2181 CLUSTER=cluster ; while [ true ]; do sleep 5; done"
  volumes:
    - .:/test
  links:
    - zookeeper
xyz:
  image: def.com/temp2
  command: /bin/bash -c "sleep 10;python -m test --zk-hosts=zookeeper:2181 --zk-cluster-path=cluster "
  links:
    - zookeeper
If you need any more details, I'll be glad to let you know. Thanks in advance.

The way I solved my problem was to use a shared volume between the containers: the first process creates a new file on this volume after it has done its job, and the other process runs a loop checking whether that file exists. Once the second process detects the file, it starts consuming the data.
That is a simple way of solving this producer-consumer race condition in docker-compose using bash scripting. Hope this helps.
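
For illustration, here is a minimal sketch of that handshake, assuming both containers mount a shared volume at /sync (the mount point and the sentinel file name are made up for the example):

# producer side (bootstrapapi): create the sentinel only if the real work succeeded
/bin/bash -c "cd /code; make test_data ZK_HOSTS=zookeeper:2181 CLUSTER=cluster && touch /sync/bootstrap.done; while true; do sleep 5; done"

# consumer side (xyz): poll for the sentinel file instead of a fixed sleep
/bin/bash -c "until [ -f /sync/bootstrap.done ]; do sleep 1; done; python -m test --zk-hosts=zookeeper:2181 --zk-cluster-path=cluster"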

Related

Make a script that starts and shutdown both redis and sidekiq

I'm fairly new to Bash, redis and linux in general, and I'm having trouble creating a script. This is also my first question; I hope it is not a duplicate.
So here's the problem: I'm creating a simple application in ruby for educational purposes, but the feature I'm trying to implement uses redis and sidekiq. What I want to do is create an executable script (I named it server) that starts the redis server and then sidekiq, but it should also shut down redis after the user finalizes sidekiq.
This is what I came up with:
#!/usr/bin/env sh
set -e
redis-server --daemonize yes
bundle exec sidekiq -r ./a/sample/path/worker.rb
redis-cli shutdown # this is not working, I want to execute this after shutting sidekiq down...
When I run the fourth line, it starts the little Sidekiq "welcome page" and I can't do anything until I shut it down with Control + C. I assumed that after shutting it down like this, the script would continue with the next command, which is redis-cli shutdown.
But it does not. When I Control + C the sidekiq, it simply goes back to the command line.
Is there anyone familiar with these concepts that could help me? I wanted a script that would also shutdown redis after I'm done with sidekiq.
Thanks!
Have you considered using Foreman?
http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
https://github.com/ddollar/foreman
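
As for why the script never reaches redis-cli shutdown: Control + C sends SIGINT to the whole foreground process group, so the script itself is interrupted along with sidekiq, and set -e would abort on sidekiq's non-zero exit status anyway. A minimal sketch of a trap-based alternative that shuts redis down however sidekiq exits (same paths as in the question):

#!/usr/bin/env sh
redis-server --daemonize yes
# shut redis down whenever this script exits, for any reason
trap 'redis-cli shutdown' EXIT
# turn an interrupting signal into a normal exit so the EXIT trap still fires
trap 'exit 130' INT TERM
bundle exec sidekiq -r ./a/sample/path/worker.rb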

How to perform healthcheck on a cassandra DB dockerfile

So I need to create a Docker image of cassandra that has a keyspace, so I made this Dockerfile:
FROM cassandra
WORKDIR /usr/app
COPY script.cql ./script.cql
COPY entrypoint-wrap.sh ./entrypoint-wrap.sh
CMD ["cassandra", "-f"]
RUN bash entrypoint-wrap.sh
and script.cql contains
CREATE KEYSPACE project WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 3};
and I want to run this script:
#!/bin/bash
echo "checkcqlsh"
program=no
while [ "$program" = no ]
do
  if ! [ -x "$(command -v cqlsh)" ]; then
    echo 'Error: cqlsh is not included.'
    program=no
  else
    program=yes
    cqlsh -f script.cql
  fi
done
But when it reaches the if condition, the image build stops with an error saying the DB hasn't started yet. How do I check whether the cassandra DB is up and running?
Having CMD and then RUN will not work, as they are executed at different lifecycles.
One way is to have a custom entrypoint that starts Cassandra in the background, waits until it is online, applies the schema change, and then waits indefinitely, or until SIGTERM (waiting for a signal is better, as it allows Cassandra to shut down cleanly); a sketch of this approach is shown below.
Another way is to pre-initialize the container at build time, something like this:
Start Cassandra in the background with RUN
Wait until it has started, and apply the schema changes
Terminate Cassandra
Then a normal cassandra -f will work as the CMD command.
An example of this is available in DataStax's JIRA for the Java driver, which links to a gist with a Dockerfile + scripts.
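
For illustration, a minimal sketch of such an entrypoint-wrap.sh, assuming the official cassandra image (whose stock entrypoint typically lives at /usr/local/bin/docker-entrypoint.sh) and the script.cql path from the Dockerfile above:

#!/bin/bash
# start Cassandra in the background via the image's stock entrypoint (path assumed)
/usr/local/bin/docker-entrypoint.sh cassandra -f &
cass_pid=$!
# poll until cqlsh can actually talk to the server
until cqlsh -e 'DESCRIBE KEYSPACES' > /dev/null 2>&1; do
  echo "waiting for cassandra to come online..."
  sleep 2
done
# apply the schema once the server is up
cqlsh -f /usr/app/script.cql
# hand control back to Cassandra; SIGTERM to the container then stops it cleanly
wait "$cass_pid"

In the Dockerfile, this script would replace both the CMD and the RUN line, e.g. ENTRYPOINT ["./entrypoint-wrap.sh"].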

Can't terminate node(js) process without terminating ssh server in docker container

I'm using a Dockerfile that ends with a CMD ["/start.sh"]:
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js
If for some reason I need to kill the node process, the ssh server is closed as well (forcing me to restart the container to reconnect).
Any simple way to avoid this behavior?
Thank you.
The container exits as soon as the main process of the container exits. In your case, the main process inside the container is the start.sh shell script. start.sh starts the ssh service and then runs the nodejs process as a child process. Once the nodejs process dies, the shell script exits as well, and so does the container. What you can do is put the nodejs process in the background.
#!/bin/bash
service ssh start
/usr/bin/node /myApp/app.js &
# Need the following infinite loop as the shell script should not exit
while true; do
  sleep 2
done
I DO NOT recommend this approach though. You should have only a single process per container. Read the following answers to understand why -
Running multiple applications in one docker container
If you still want to run multiple processes inside container, there are better ways to do it like using supervisord - https://docs.docker.com/config/containers/multi-service_container/
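
If you do go the supervisord route from the link above, a minimal sketch of a config for this setup might look like the following (the sshd path and program names are assumptions, not from the question):

[supervisord]
nodaemon=true

[program:sshd]
; run sshd in the foreground so supervisord can manage it (path assumed)
command=/usr/sbin/sshd -D

[program:node]
command=/usr/bin/node /myApp/app.js

The Dockerfile's CMD would then start supervisord instead of start.sh, and killing the node process would only cause supervisord to restart that one program rather than take down the container.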

Script not working when invoked via cron

I have created a script script.sh which contains
./ecc start pricingUpdater & >> /home/eceuser/Muthu/details/Latest.txt
where ecc is another script.
If I run the script manually by simply invoking ./script.sh I am able to start the utility:
Starting Oracle Communication Elastic Charging Controller 11.2.0.1 ...
-- Node 'pricingUpdater' started with PID 10705
===> [{GridEventImpl
status: true
node: PricingUPdater node pricingUpdater on Host 10.180.85.16
details: [pid:10705, state:running]
}]
but if I try to run the same script via crontab I get:
Starting Oracle Communication Elastic Charging Controller 11.2.0.1 ...
so the utility is not started.
It seems that you are running the cron job as a user which doesn't have permission to edit or create the /home/eceuser/Muthu/details/Latest.txt file.
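
It is also worth remembering that cron runs jobs with a minimal environment and from the user's home directory, so relative paths like ./ecc often break; and note that in script.sh the >> redirection appears after the & operator, so it never applies to the ecc command at all. A hedged sketch of a crontab entry that avoids both pitfalls (the working directory and schedule are illustrative):

# run hourly as the owner of /home/eceuser, with absolute paths and the
# redirection attached to the command rather than left after the & operator
0 * * * * cd /home/eceuser/Muthu && ./ecc start pricingUpdater >> /home/eceuser/Muthu/details/Latest.txt 2>&1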

Running node.js tests against test data in a MongoDB database

I'm trying to set up a script to execute tests for my node.js program, which uses MongoDB. The idea is that I want to have a script I can run that:
Starts the MongoDB process, forked as a daemon
Pre-populates the database with some test data
Starts my node server with forever, so it runs as a daemon
Runs my tests
Drops the test data from the database
I have a crude script that performs all these steps. My problem is that MongoDB takes a variable amount of time to set up, which results in sleep calls in my script. Consequently it only works occasionally.
# the directory of this script
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# launch mongodb.
$DIR/../../db/mongod --fork --logpath ./logs/mongodb.log --logappend --dbpath ./testdb/ --quiet
# takes a bit of time for the database to get set up,
# and since we make it a daemon process we cant run the tests immediately
sleep 1
# avoid EADDRINUSE errors because existing node servers are up.
killall node &> /dev/null
# start up our node server using a test database.
forever start $DIR/../main.js --dbname=testdb --logpath=test/logs/testlog.log
# takes a bit of time for node to get set up,
# and since we make it a daemon process we cant run the tests immediately
sleep 1
# run any database setup code (inject data for testing)
$DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/setup.js --quiet
# actually run the tests
node $DIR/tests.js
# kill the servers (this could be a little less heavy handed...)
killall node &> /dev/null
killall forever &> /dev/null
# finally tear down the database (drop anything we've added to the test db)
$DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/teardown.js --quiet
# and then shut mongodb down
kill -2 `ps ax | grep mongod | grep -v grep | awk '{print $1}'`
What is the best way to go about what I'm trying to do? Am I going down a rabbit hole here, or am I missing something in the MongoDB docs?
Ask yourself what the purpose of your testing is: is it to test the actual DB connection in your code, or to focus on whether your code handles and processes data from the DB correctly?
Assuming your code is in Javascript, if you strictly want to test that your code logic handles data correctly, and you are using a MongoDB wrapper class (e.g. Mongoose), one thing you may want to add to your workflow is writing and running spec tests with the Jasmine test suite.
This would involve writing test code and mocking up test data as javascript objects. Yes, that means no actual data from the DB itself would be involved in your spec tests. After all, your primary purpose is to test whether your code is logically working, right? It's your project, only you know the answer :)
If your main problem is finding out when mongod has actually started, why don't you write a script which will tell you that?
For example, you can write an until loop that checks whether the client can connect to the mongo server, based on its return value. Instead of sleep 1, use something like this:
isMongoRunning=-1
until [[ ${isMongoRunning} -eq 0 ]]; do
  sleep 1
  $DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/empty.js --quiet
  isMongoRunning=${?}
done
This loop will end only after mongodb starts.
Also, if you would like to improve the stopping of mongodb, add --pidfilepath to your mongod invocation so you can easily find which process to terminate.
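
For illustration, a minimal sketch of that change against the script above (the pid file location is arbitrary):

# start mongod with a pid file so we can address the exact process later
$DIR/../../db/mongod --fork --pidfilepath ./logs/mongod.pid --logpath ./logs/mongodb.log --logappend --dbpath ./testdb/ --quiet
# ... run the tests ...
# then shut mongodb down without grepping through ps
kill -2 "$(cat ./logs/mongod.pid)"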
