Application dependencies (other apps) - Spring

We need to deploy our four applications (three Spring Boot apps and one ZooKeeper) with docker stack. As our DevOps guy told us, there is no way in docker stack to define that one application depends on another, as you can in Docker Compose, so we as developers need to solve it in code.
Can you tell me how to do that, or what the best way is? One of our applications has to start first because it manages the database (migrations and so on). The other applications can start once the database is prepared. Any ideas? Thanks.

If you want to run all four applications in one Docker container, you can refer to this post: Run multiple services in a container.
If you want to run the four applications with Docker Compose, you can refer to this post on startup order; it uses depends_on between your app images.
Whichever way you choose, you must write a script that checks whether your first app has finished preparing the database; see wait-for-postgres.sh to learn how to use sleep in a shell loop to repeatedly check your first app's status.
A more precise approach I can suggest is, for example:
Put a shared static flag, initially false:
public static boolean is_app_start = false;
When you finish preparing the database, change this value to true.
Write a @RequestMapping("/is_app_start") handler in your controller that returns this value.
Use curl in your shell script to check the value, as in the sketch below.
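A minimal sketch of that pattern, assuming a Spring Boot app (the controller class name is made up; the flag and endpoint names match the steps above):

```java
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal sketch: a readiness flag that the migration app flips once the
// database is prepared, plus an endpoint the other apps can poll.
@RestController
public class StartupStatusController {

    // volatile so the value set by the migration thread is immediately
    // visible to the request-handling threads
    public static volatile boolean is_app_start = false;

    @RequestMapping("/is_app_start")
    public boolean isAppStart() {
        return is_app_start;
    }
}

// Elsewhere, at the end of your migration logic:
//     StartupStatusController.is_app_start = true;
```

The wait script in each dependent container can then poll the endpoint before starting its own app, along the lines of: until curl -sf http://migrator:8080/is_app_start | grep -q true; do sleep 2; done (host and port are placeholders).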

Related

Running bash script on GCP VM instance programmatically

I've read multiple posts on running scripts on GCP VMs but unfortunately could not find an answer that would satisfy my needs.
I have a Go application and I'm looking for a way to run a bash script on a VM instance programmatically.
I'm using the Google Cloud Go SDK, which allows me to fetch VM instance info. Unfortunately, the SDK does not provide functionality for running a bash script on a specific instance (unlike the Azure SDK, for example).
Options I've found:
The Google Cloud Compute SDK has an option to set a startup script that will run every time an instance is restarted.
Add an instance-level public SSH key, establish an SSH connection, and run the script using a Go SSH client.
Problems:
Obviously, a startup script requires an instance reboot, and this is not possible in my use case.
SSH might also be problematic, in case the instance is not running an SSH daemon or the SSH port is not open. Also, the SSH daemon config does not permit root login by default (PermitRootLogin may be set to no), so the script might run as a non-privileged user, which makes this option unsuitable as well.
I should probably note that I am not authorized to change the configuration of those VMs (for example, to change the SSH daemon config to permit root login). I can just use token-based authentication to access them, preferably through the SDK, though other options are also possible as long as I am not exposing the instances to additional risk.
What options do I have? Is this even doable? Am I missing something?
Thanks!
As Kolban said, there is no API to trigger a bash script inside the VM from the outside. The best solution is to deploy a web server (a REST API) on the VM that calls the bash script, and to expose it (externally or internally).
But you can also cheat. You can create a daemon on your VM, started by a startup script, that watches a custom metadata key, checking it every second, say.
When the metadata is updated, the daemon performs the action. You can imagine that the metadata contains the script to run, along with its parameters. At the end of the run, the daemon cleans the metadata.
So now, to run your bash script, call the setMetadata API. It's not out of the box, but you get something close to what you expected.
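A minimal sketch of such a daemon, written in Java here (any language works), assuming a custom metadata key named run-script; the key name is illustrative, and instead of cleaning the key afterwards, this version simply remembers the last value it executed:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Polls a custom instance metadata key and runs its value as a bash
// script whenever it changes. Start this from the VM's startup script.
public class MetadataWatcher {

    private static final String KEY_URL =
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/run-script";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String last = "";
        while (true) {
            HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(KEY_URL))
                .header("Metadata-Flavor", "Google") // required by the metadata server
                .build();
            HttpResponse<String> resp =
                client.send(req, HttpResponse.BodyHandlers.ofString());
            String script = resp.body();
            // A 404 simply means the key has not been set (yet).
            if (resp.statusCode() == 200 && !script.isBlank() && !script.equals(last)) {
                new ProcessBuilder("bash", "-c", script).inheritIO().start().waitFor();
                last = script;
            }
            Thread.sleep(1000); // check every second, as described above
        }
    }
}
```

To avoid polling, the metadata server also supports hanging GET requests via the ?wait_for_change=true query parameter, which block until the value changes.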
Think of GCP as providing the virtual machine infrastructure: compute, memory, disk and networking. What runs when the machine boots is between you and the machine image. I am hearing you say that you want to run a bash script within the VM. That is outside the governance of GCP; GCP only affects the operation and existence of the environment. If you want to run a script within the VM programmatically, you will need to run some form of daemon inside the VM that can be signaled to run such a script. This could be a web server such as Flask or Express, it could be your SSH server, or it could be some other technology you choose.
The core thing I think you were looking for was a GCP API that, when called, would run a script within the Compute Engine instance. I'm going to say that there is no such API.

Docker and Rancher

I never really understood how to start a Docker container and how to keep it alive.
When you start a container in the terminal, you must provide a command so that it stays alive; when you don't provide one, it restarts every time. You can provide /bin/bash so it stays open. (Could you show me the right way to do this, keeping it open with bash?)
When it comes to Rancher, you can provide the command too when you create a new container, but if you don't, the container doesn't restart; it stays alive. So what does this mean: does it have a default command (/bin/bash)? What command exactly does Rancher execute to start the container?
Thank you all.
It is probably best if you read up a bit on Docker, to get the various concepts clear. From your use of "a docker", it seems that you don't really have all the pieces yet for an easy understanding.
A quick layout would be that you have:
Image. I have seen this compared to a 'class' in programming.
Container. In the same comparison, this would be an object: an instance of a class.
If you want to run something with Docker, you start a container from an image, just like if you want to create an object, you create one from a class. (Let's not take this comparison/simile too far.)
Now a container's purpose is to run something, or rather, to run a single something. So "keeping a docker open" is not something you should want. What you want is to run, for instance, a server, or a script.
Every container runs a single process (or should run one). As the 'official' use case is not "create a virtual server you can play around in", it might behave strangely or feel complicated if you want a place to SSH to without running a specific thing.
This also means you don't want to run services in the background: if you run Apache, don't run it as a daemon; just run it in the foreground. That's what the Docker container is for. If you need to run something else (for instance, a database server), you would start a second container.
There might be exceptions to this, but to get your head around why things work as they do, you should probably start somewhat religiously with these 'rules', and go on from that point.
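To answer the concrete question: if you just want an interactive shell to experiment in, docker run -it ubuntu /bin/bash (the image name is just an example) starts a container whose single process is bash, and the container exits when you exit the shell. And Rancher, which runs containers through Docker, doesn't invent a command of its own: when you don't provide one, the image's default command (the CMD/ENTRYPOINT baked into the image) is what runs.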

How to run an application inside docker safely

I want to run an arbitrary application inside a Docker container safely, as if it were in a VM. To do so, I save the application (which I downloaded from the web and don't trust) in a directory of the host system, create a volume that maps this directory to the home directory of the container, and then run the application inside the container. Are there any security issues with this approach? Are there better solutions to accomplish the same task?
Moreover, to install all the necessary dependencies, I allow an arbitrary script to execute in a bash shell running inside the container: could this be dangerous?
To add to @Dimitris's answer, there are other things you need to consider.
There are certain things containers do not contain. Docker uses namespaces to alter the process's view of the system, i.e. network, shared memory, etc. But you have to keep in mind it is not like KVM: Docker containers talk to the kernel directly, unlike KVM VMs, for example through /proc/sys.
So if the arbitrary application tries to access kernel subsystems like cgroups, /proc/sys, /proc/bus, etc., you could be in trouble. I would say it's fine unless it's a multi-tenant system.
As long as you do not give the application sudo access, you should be good to try it out.
Dependencies are better off defined in the Dockerfile, in a clear way for others to see. Opting to run a script instead will also do the job, but it's more inconvenient.
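If you do run untrusted code this way, it can also help to lock the container down explicitly: docker run supports flags such as --user to run as a non-root UID, --cap-drop ALL to remove Linux capabilities, --network none to cut off network access, and --read-only to make the container's filesystem read-only. These are standard docker run options; how far you can tighten things depends on what the application actually needs.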

EC2 init.d script - what's the best practice

I'm creating an init.d script that will run a couple of tasks when the instance starts up.
it will create a new volume with our code repository and mount it if it doesn't exist already.
it will tag the instance
Completing the tasks above is crucial for our site (i.e. without the code repository mounted, the site won't work). How can I make sure the server doesn't end up publicly visible before then? Should my init.d script start by de-registering the instance from the ELB (I'm not even sure it will be registered at that point) and then register it again when all the tasks have finished successfully?
What is the best practice?
Thanks!
You should have a health check on your ELB, so your server doesn't receive traffic unless it reports as healthy. And it shouldn't report healthy if the boot script errors out.
(Also, you should look into using cloud-init. That way you can change the boot script without making a new AMI.)
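As a sketch of the health-check idea, assuming the AWS SDK for Java v2 and a classic ELB (the load balancer name and health-check target are placeholders): point the health check at an endpoint that only starts answering once your init.d tasks have completed.

```java
import software.amazon.awssdk.services.elasticloadbalancing.ElasticLoadBalancingClient;
import software.amazon.awssdk.services.elasticloadbalancing.model.ConfigureHealthCheckRequest;
import software.amazon.awssdk.services.elasticloadbalancing.model.HealthCheck;

public class ElbHealthCheck {
    public static void main(String[] args) {
        ElasticLoadBalancingClient elb = ElasticLoadBalancingClient.create();

        // Serve 200 on /health only after the volume is mounted and the
        // instance is tagged; until then the ELB keeps the instance out.
        elb.configureHealthCheck(ConfigureHealthCheckRequest.builder()
            .loadBalancerName("my-load-balancer")   // placeholder name
            .healthCheck(HealthCheck.builder()
                .target("HTTP:80/health")           // placeholder target
                .interval(30)                       // seconds between checks
                .timeout(5)
                .unhealthyThreshold(2)
                .healthyThreshold(2)
                .build())
            .build());
        elb.close();
    }
}
```

Until the health check passes, the ELB keeps the instance out of service, which also answers the de-register/register question: you usually don't need to do that by hand.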
I suggest you use CloudFormation instead. You can bring up a full stack of your system by representing it in a JSON template.
For example, you can create an Auto Scaling group whose instances have unique tags and an additional volume attached (which presumably has your code).
Here's a sample JSON template attaching an EBS volume to an instance:
https://s3.amazonaws.com/cloudformation-templates-us-east-1/EC2WithEBSSample.template
And here are many other JSON templates that you can use for guidance to deploy your specific stack and application.
http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
Of course, you can accomplish the same using an init.d script or the rc.local file in your instance, but I believe CloudFormation is a cleaner solution driven from the outside (rather than from inside your instance).
You could also write your own script that brings up your stack from the outside, but why reinvent the wheel?
Hope this helps.

How can I launch 10 instances and tag them at once

I want a single script that can launch and tag my instances, which I can then configure with Chef.
Say my service requires 10 instances: I want to be able to launch all 10, then tag them according to their role (web, db, app server).
Once I've done that, I can use Chef to connect to each one and configure them how I want.
But I'm confused: I know I can launch instances, but how do you wait for them to come online? Do you have to poll continuously on some sort of timer? That seems like a very hacky way to do it!
If you're going to do everything from the outside, you do just have to poll and wait for the instance to be ready (which doesn't necessarily mean it's ready to use; actual startup completes a little later).
You can also pass user data when you start an instance. Most AMIs support cloud-init and will interpret the data passed as a shell script if it is in the right format. That shell script could run Chef or do other configuration tasks. A sketch combining both ideas follows.
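This sketch assumes the AWS SDK for Java v2; the AMI ID, role tag, and user-data script are placeholders. Tags can be applied in the same RunInstances call via tag specifications, and the SDK's built-in waiter replaces a hand-rolled polling loop:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.List;
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.*;

public class LaunchAndTag {
    public static void main(String[] args) {
        Ec2Client ec2 = Ec2Client.create();

        // User data: cloud-init runs this as a shell script on first boot.
        String userData = "#!/bin/bash\n# bootstrap Chef or other config here\n";

        // Launch 10 instances, tagged with their role at creation time.
        RunInstancesResponse resp = ec2.runInstances(RunInstancesRequest.builder()
            .imageId("ami-12345678")                 // placeholder AMI
            .instanceType(InstanceType.T3_MICRO)
            .minCount(10)
            .maxCount(10)
            .userData(Base64.getEncoder().encodeToString(
                userData.getBytes(StandardCharsets.UTF_8)))
            .tagSpecifications(TagSpecification.builder()
                .resourceType(ResourceType.INSTANCE)
                .tags(Tag.builder().key("role").value("web").build())
                .build())
            .build());

        List<String> ids = resp.instances().stream()
            .map(Instance::instanceId)
            .toList();

        // Let the SDK poll for us until the instances reach "running".
        ec2.waiter().waitUntilInstanceRunning(
            DescribeInstancesRequest.builder().instanceIds(ids).build());

        System.out.println("Running: " + ids);
        ec2.close();
    }
}
```

Note that "running" still isn't "booted and configured", matching the caveat above; readiness of your own software needs its own check (for example, the user-data script reporting in when it finishes).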
