So I need to create a Docker image of Cassandra that already has a keyspace, so I made this Dockerfile:
FROM cassandra
WORKDIR /usr/app
COPY script.cql ./script.cql
COPY entrypoint-wrap.sh ./entrypoint-wrap.sh
CMD ["cassandra", "-f"]
RUN bash entrypoint-wrap.sh
and script.cql contains
CREATE KEYSPACE project WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 3};
and I want to run this script:
#!/bin/bash
echo "checkcqlsh"
program=no
while [ $program = no ]
do
    if ! [ -x "$(command -v cqlsh)" ]; then
        echo 'Error: cqlsh is not included.'
        program=no
    else
        program=yes
        cqlsh -f script.cql
    fi
done
But when it checks the if condition, the image build stops with an error saying the DB hasn't started yet. How can I check whether the Cassandra DB is up and running?
Having CMD and then RUN will not work, because they are executed at different points in the lifecycle: RUN runs at image build time, while CMD runs when a container is started.
One way is to have a custom entrypoint that starts Cassandra in the background, waits for it to come online, applies the schema change, and then waits indefinitely, or until SIGTERM (waiting for a signal is better, as it allows Cassandra to shut down cleanly).
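A minimal sketch of such an entrypoint (illustrative, not a drop-in solution), assuming the stock image's docker-entrypoint.sh is on the PATH and script.cql is copied to /usr/app as in the question:
#!/bin/bash
# start Cassandra in the background via the image's normal entrypoint
docker-entrypoint.sh cassandra -f &
CASSANDRA_PID=$!

# wait until cqlsh can connect, then apply the schema
until cqlsh -e 'DESCRIBE KEYSPACES' >/dev/null 2>&1; do
    echo "waiting for Cassandra to come online..."
    sleep 2
done
cqlsh -f /usr/app/script.cql

# forward SIGTERM/SIGINT to Cassandra so it can shut down cleanly, then block on it
trap 'kill -TERM "$CASSANDRA_PID"; wait "$CASSANDRA_PID"' TERM INT
wait "$CASSANDRA_PID"
Using CREATE KEYSPACE IF NOT EXISTS in script.cql keeps later container starts from failing because the keyspace already exists.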
Another way is to pre-initialize the container at build time, something like this:
Start Cassandra in the background with RUN
Wait until it has started, and apply the schema changes
Terminate Cassandra
Then the normal cassandra -f will work as the CMD command.
An example of this approach is available in DataStax's JIRA for the Java driver, which links to a gist containing a Dockerfile + scripts.
Related
I want a custom bash script in the container that is called automatically before the container stops (docker stop or ctrl + c).
According to this Docker doc and multiple Stack Overflow threads, I need to catch the SIGTERM signal in the container and then run my custom script when the signal arrives. As far as I know, the SIGTERM sent by docker stop is only delivered to the root process with PID 1.
Relevant part of my Dockerfile:
...
COPY container-scripts/entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
I use the [] (exec) form to define the entrypoint; as far as I know, this runs my script directly, without a /bin/sh -c wrapper, so it becomes PID 1, and when the script eventually execs another process, that process becomes the main process and will receive the docker stop signal.
entrypoint.sh:
...
# run the external bash script if it exists
BOOT_SCRIPT="/boot.sh"
if [ -f "$BOOT_SCRIPT" ]; then
printf ">> executing the '%s' script\n" "$BOOT_SCRIPT"
source "$BOOT_SCRIPT"
fi
# start something here
...
The boot.sh is used by child containers to execute something else that the child container wants. Everything is fine, my containers work like a charm.
ps axu in a child container:
PID USER TIME COMMAND
1 root 0:00 {entrypoint.sh} /bin/bash /entrypoint.sh
134 root 0:25 /usr/lib/jvm/java-17-openjdk/bin/java -server -D...
...
421 root 0:00 ps axu
Before stopping the container I need to run some commands automatically so I created a shutdown.sh bash script. This script works fine and does what I need. But I execute the shutdown script manually this way:
$ docker exec -it my-container /bin/bash
# /shutdown.sh
# exit
$ docker container stop my-container
I would like to automate the execution of the shutdown.sh script.
I tried to add the following to the entrypoint.sh but it does not work:
trap "echo 'hello SIGTERM'; source /shutdown.sh; exit" SIGTERM
What is wrong with my code?
Your help and comments guided me in the right direction.
I went through the official documentation again (here, here, and here) and finally found what the problem was.
The issue was the following:
My entrypoint.sh script, which kept the container alive, executed the following command at the end:
# start the ssh server
ssh-keygen -A
/usr/sbin/sshd -D -e "$@"
The -D option runs the SSH daemon in the foreground, so sshd does not become a daemon. Actually, that was my intention; this is how I kept the container alive.
But this foreground process prevented the trap command from being executed properly. I changed the way I start sshd, and now it runs as a normal background process.
Then I added the following command to keep my Docker container alive (this is a recommended best practice):
tail -f /dev/null
But of course, the same issue appeared: tail runs as a foreground process, so the trap command does not do its job.
The only way I found to keep the container alive and let entrypoint.sh remain the foreground process in Docker is the following:
while true; do
    sleep 1
done
This way the trap command works fine, and my bash function that handles the SIGTERM, SIGINT, etc. signals runs properly when the time comes.
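For completeness, the relevant tail of the entrypoint now looks roughly like this (a sketch assembled from the pieces described above, with the unrelated parts left out):
# run the shutdown script when docker stop (SIGTERM) or Ctrl+C (SIGINT) arrives
trap "source /shutdown.sh; exit 0" SIGTERM SIGINT

# start the ssh server as a background process instead of in the foreground
ssh-keygen -A
/usr/sbin/sshd -e "$@" &

# keep PID 1 alive; between the short sleeps bash gets a chance to run the trap
while true; do
    sleep 1
done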
But honestly, I do not like this solution. This endless loop with a sleep looks ugly, but I have no idea at the moment how to manage it in a nice way :(
But that is another question that does not belong in this thread (though it would be great if you could suggest a better solution).
In my Travis CI, part of my verification is to start a docker container and verify that it doesn't fail within 10 seconds.
I have a yarn script docker:run:local that calls docker run -it <mytag> node app.js.
If I call the yarn script with timeout from a bash shell, it works fine:
$ timeout 10 yarn docker:run:local; test $? -eq 124 && echo "Container ran for 10 seconds without error"
This calls docker run, lets it run for 10 seconds, then kills it (if not already returned). If the exit code is 124, the timeout did expire, which means the container was still running. Exactly what I need to verify that my docker container is reasonably sane.
However, as soon as I run this same command from within a script, either in a test.sh file called from the shell, or if putting it in another yarn script and calling yarn test:docker, the behaviour is completely different. I get:
ERRO[0000] error waiting for container: context canceled
Then the command hangs forever: there's no 10-second timeout, and I have to Ctrl-Z it and then kill -9 the process. If I run top, I now have a docker process using all my CPU forever. This does not happen when I use timeout with any other command, like sleep 20 && echo "Finished sleeping", so I suspect it has something to do with how docker works in interactive mode, but that's only my guess.
What's causing timeout docker:run to fail from a script but work fine from a shell and how do I make this work?
Looks like running docker in interactive mode is causing the issue.
Run docker without interactive mode by removing the -it flags, or run it in detached mode by specifying -d instead of -it, like so:
docker run -d <mytag> node
or
docker run <mytag> node
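The check from the question can then stay essentially the same inside a script, for example (a sketch, with <mytag> as a placeholder as before):
#!/bin/bash
# run the container non-interactively for at most 10 seconds
timeout 10 docker run <mytag> node app.js
# exit code 124 means the timeout expired, i.e. the container was still running
test $? -eq 124 && echo "Container ran for 10 seconds without error"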
I've downloaded and set up elasticsearch on an EC2 instance that I use to run Jenkins. I'd like to use Jenkins to run some unit tests that use the local elasticsearch.
My problem is that I haven't found a way to start Elasticsearch locally and then run the tests, since the script doesn't proceed past starting ES because that process never exits.
I can do this by starting ES manually through SSH and then building a project with only the unit tests. However, I'd like to automate the ES launching.
Any suggestions on how I could achieve this? I've tried using a single "Execute shell" block and two "Execute shell" blocks.
It is happening because you are starting the elasticsearch command in a blocking way. That means the command waits until the Elasticsearch server shuts down, so Jenkins just keeps waiting.
You can use the following command:
./elasticsearch >/dev/null 2>&1 &
or
nohup ./elasticsearch >/dev/null 2>&1 &
This will run the command in a non-blocking way.
You can also add a small delay to allow the Elasticsearch server to start:
nohup ./elasticsearch >/dev/null 2>&1 & sleep 5
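If you want to avoid guessing the delay, a variant (my own addition, assuming Elasticsearch's default HTTP port 9200) is to poll until the server answers:
nohup ./elasticsearch >/dev/null 2>&1 &
# poll the HTTP API until it responds, giving up after roughly 30 seconds
for i in $(seq 1 30); do
    curl -s http://localhost:9200 >/dev/null && break
    sleep 1
done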
Using docker-compose, I am trying to launch a ZooKeeper server, a bootstrap process for my API, and another process which has to consume the data written by my API to the ZooKeeper server.
Once I execute docker-compose up, my zookeeper server launches successfully and my bootstrap API is able to connect to it and is able to write data successfully.
The constraint here is that my 2nd process needs to wait for my API to write the data to ZooKeeper; otherwise it throws an exception, since no node will have been created by the API yet. So, in the command section of my docker-compose.yml file, I run a bash command that keeps my bootstrap API running indefinitely with a while loop (so the program doesn't exit), and I added a sleep statement in my second process so that it waits until the API has done its job (a sort of race-condition handling).
From what I understood, docker-compose handles ordering using "links" in the docker-compose.yml file, but it doesn't handle the state of the individual processes that are launched. By state I mean that the 2nd process needs to start only once the 1st process has exited successfully.
Here's my docker-compose.yml file -
zookeeper:
  image: xyz.com/temp
  ports:
    - "10000:2181"
bootstrapapi:
  image: xyz.com/temp1
  command: /bin/bash -c "cd /code; make test_data ZK_HOSTS=zookeeper:2181 CLUSTER=cluster ; while [ true ]; do sleep 5; done"
  volumes:
    - .:/test
  links:
    - zookeeper
xyz:
  image: def.com/temp2
  command: /bin/bash -c "sleep 10;python -m test --zk-hosts=zookeeper:2181 --zk-cluster-path=cluster "
  links:
    - zookeeper
If you need any more details, I'll be glad to let you know. Thanks in advance.
The way I solved my problem was to use a shared volume between the containers: the first process creates a new file on this volume after it has done its job, and the other process runs a loop that checks whether this file has been created. Once the 2nd process detects the file, it starts its job of consuming that data.
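In docker-compose terms the idea looks roughly like this (a sketch only; the zookeeper service stays unchanged, and the ./shared host directory and the bootstrap_done file name are just illustrative):
bootstrapapi:
  image: xyz.com/temp1
  command: /bin/bash -c "cd /code; make test_data ZK_HOSTS=zookeeper:2181 CLUSTER=cluster; touch /shared/bootstrap_done; while true; do sleep 5; done"
  volumes:
    - ./shared:/shared
  links:
    - zookeeper
xyz:
  image: def.com/temp2
  command: /bin/bash -c "until [ -f /shared/bootstrap_done ]; do sleep 1; done; python -m test --zk-hosts=zookeeper:2181 --zk-cluster-path=cluster"
  volumes:
    - ./shared:/shared
  links:
    - zookeeper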
That is a simple way of solving this producer-consumer race condition in docker-compose using bash scripting. Hope this is helpful.
I'm trying to set up a script to execute tests for my node.js program, which uses MongoDB. The idea is that I want to have a script I can run that:
Starts the MongoDB process, forked as a daemon
Pre populates the database with some test data
Starts my node server with forever, so it runs as a daemon
Run my tests
Drop the test data from the database
I have a crude script that performs all these steps. My problem is that MongoDB takes a variable amount of time to set up, which results in sleep calls in my script. Consequently it only works occasionally.
# the directory of this script
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# launch mongodb.
$DIR/../../db/mongod --fork --logpath ./logs/mongodb.log --logappend --dbpath ./testdb/ --quiet
# takes a bit of time for the database to get set up,
# and since we make it a daemon process we can't run the tests immediately
sleep 1
# avoid EADDRINUSE errors because existing node servers are up.
killall node &> /dev/null
# start up our node server using a test database.
forever start $DIR/../main.js --dbname=testdb --logpath=test/logs/testlog.log
# takes a bit of time for node to get set up,
# and since we make it a daemon process we can't run the tests immediately
sleep 1
# run any database setup code (inject data for testing)
$DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/setup.js --quiet
# actually run the tests
node $DIR/tests.js
# kill the servers (this could be a little less heavy handed...)
killall node &> /dev/null
killall forever &> /dev/null
# finally tear down the database (drop anything we've added to the test db)
$DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/teardown.js --quiet
# and then shut mongodb down
kill -2 `ps ax | grep mongod | grep -v grep | awk '{print $1}'`
What is the best way to go about what I'm trying to do? Am I going down a rabbit hole here, or am I missing something in the MongoDB docs?
Ask yourself what the purpose of your testing is: is it to test the actual DB connection in your code, or to focus on whether your code handles and processes data from the DB correctly?
Assuming your code is in JavaScript, if you strictly want to test that your code logic handles data correctly, and you are using a MongoDB wrapper class (e.g. Mongoose), one thing you may want to add to your workflow is creating and running spec tests with the Jasmine test suite.
This would involve writing test code and mocking up test data as JavaScript objects. Yes, that means no actual data from the DB itself will be involved in your spec tests. After all, your primary purpose is to test whether your code is logically working, right? It's your project, only you know the answer :)
If your main problem is finding out when mongod has actually started, why don't you write a script which will tell you that?
For example, you can write an until loop that checks whether the client can connect properly to the mongo server, based on the return value. Instead of using sleep 1, use something like this:
isMongoRunning=-1
until [[ ${isMongoRunning} -eq 0 ]]; do
    sleep 1
    $DIR/../../db/mongo 127.0.0.1:27017/testdb $DIR/empty.js --quiet
    isMongoRunning=${?}
done
This loop will end only after MongoDB has started.
Also, if you would like to improve how you stop mongodb, add --pidfilepath to your mongod command line so you can easily find which process to terminate.
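For example (a sketch; the pid file path is arbitrary):
# start mongod as before, but record its pid in a known location
$DIR/../../db/mongod --fork --pidfilepath ./testdb/mongod.pid \
    --logpath ./logs/mongodb.log --logappend --dbpath ./testdb/ --quiet

# ...and later shut it down without grepping the process list
kill -2 "$(cat ./testdb/mongod.pid)"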