run commands per boot via cloud config - amazon-ec2

I am passing a cloud config via the --user-data-file argument when starting an EC2 instance from the Canonical Ubuntu images.
It works well, but the problem is that some of its commands need to run on every boot (i.e. when we stop/start or reboot the instance). Is there a way (or a section in cloud config) that allows me to describe commands that should run on every boot, not only upon instance creation?

You do not need cloud-init to do boot-time command execution. The things you want to look at are crontab, upstart scripts, /etc/rc.local, and /etc/init.d. There are a lot of options, and it depends on the distribution as well.
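For example, the crontab option mentioned above can be set up once from your user data and will then fire on every boot via cron's @reboot facility (the file path and script name here are hypothetical):

```
# /etc/cron.d/on-boot  (hypothetical path; /etc/cron.d entries need a user field)
@reboot root /usr/local/bin/on-boot.sh
```

Install the cron file and the script once at instance creation, and cron runs the script on each subsequent boot without any further involvement from cloud-init.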

Related

Best way to start multiple dependent spring boot microservices locally?

Currently my team maintains many Spring Boot microservices. When running them locally, our workflow is to open a new IntelliJ IDEA window and press the "run" button for each microservice. This does the same thing as typing gradle bootRun. At a minimum, each service depends on a config server (from which they get their config settings) and a Eureka server. These dependencies are specified in a bootstrap.yml file. I was wondering if there is a way to launch just one microservice (or some script or run configuration) that would programmatically know which dependencies to start along with the service I am testing? It seems cumbersome to start them the way we do now.
If you're using Docker, you could use Docker Compose to launch services in a specific order using the depends_on option. Take a look here and see if that will solve your problem.
https://docs.docker.com/compose/startup-order/
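A minimal sketch of what that might look like for the setup described in the question (service and image names are hypothetical); note that depends_on controls start order only, and, as the linked page explains, it does not wait for a dependency to be "ready":

```yaml
services:
  config-server:
    image: my-config-server     # hypothetical image names
  eureka:
    image: my-eureka-server
    depends_on:
      - config-server
  my-service:
    image: my-service
    depends_on:
      - config-server
      - eureka
```

With this, `docker compose up my-service` also starts config-server and eureka automatically.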

There are three instances of the application on the server, but we want to execute the cron job scheduler on one instance of the application

There are three instances of the application on the server, but we want to execute the cron job scheduler logic on only one instance of the application.
We are using Spring Data Couchbase repositories with a Couchbase database. Is there any simple solution to this problem? Please suggest one; thanks in advance. I have been struggling with this problem for many days.
Facing similar situations with multiple Spring Boot instances, we schedule the cron externally. This can be a simple cron script that executes a curl, or a dedicated external scheduler app/instance; your load balancer will pick an instance to run on.
You could also consider using Quartz or ShedLock with Couchbase to manage this without an external trigger.
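The external-cron-plus-curl idea above can be sketched as a single crontab entry (the file path, user, endpoint, and schedule here are all hypothetical):

```
# /etc/cron.d/nightly-job  (hypothetical): run at 02:00 daily; the load
# balancer routes the request to exactly one of the three instances
0 2 * * * appuser curl -fsS http://my-app.example.com/internal/run-job
```

The application then exposes that endpoint and executes the job logic when called, so only the instance that receives the request does the work.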

Spring task:scheduler or @Scheduled: restrict a job from running on multiple instances

I have one @Scheduled job which runs on multiple servers in a clustered environment. However, I want to restrict the job to run on only one server; the other servers should not run the same job once another server has started it.
I have explored Spring Batch, which has a lock mechanism using a database table, but I am looking for a solution using only Spring's task:scheduler.
I had the same problem. The solution I implemented was a lock mechanism with Hazelcast, and to make it easy to use I also added a proper annotation and a bit of Spring AOP. With this trick I was able to enforce a single schedule across the cluster with a single annotation.
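The core "compete for a lock, only the winner runs the job" idea described above can be sketched with a plain JDK file lock. This is only a toy illustration: a file lock works solely when all instances share a filesystem, whereas a real cluster would use a distributed lock such as Hazelcast (as in the answer above), ShedLock, or a database row lock. All names below are illustrative:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SingleInstanceJob {

    // Try to take an exclusive lock on a shared file; exactly one
    // contender succeeds, the others get null and skip the job run.
    static FileLock tryAcquire(FileChannel channel) {
        try {
            return channel.tryLock();
        } catch (OverlappingFileLockException | IOException e) {
            return null; // lock already held (or not obtainable)
        }
    }

    // Demo: two contenders race for the same lock; returns
    // { firstWon, secondSkippedWhileFirstHeld }.
    public static boolean[] demo() throws IOException {
        Path lockFile = Files.createTempFile("job", ".lock");
        try (FileChannel channel =
                 FileChannel.open(lockFile, StandardOpenOption.WRITE)) {
            FileLock first = tryAcquire(channel);
            FileLock second = tryAcquire(channel); // second contender loses
            boolean[] result = { first != null, second == null };
            if (first != null) {
                first.release(); // job finished; free the lock
            }
            return result;
        }
    }

    public static void main(String[] args) throws IOException {
        boolean[] r = demo();
        System.out.println("first wins: " + r[0] + ", second skips: " + r[1]);
    }
}
```

Inside the @Scheduled method, each instance would attempt the acquire, run the job only on success, and release afterwards; the losers simply return.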
Spring Batch has a nice feature: it will not run a job with the same job parameters twice.
You can use this so that when a Spring Batch job kicks off on another server, it does not run.
Usually people pass a timestamp as a parameter to bypass this logic, which you can change.

Spring Boot executable JAR cannot open privileged port as systemd service

I'd like to run my Spring Boot application as a systemd service listening on port 443 directly (not behind nginx/httpd). To do this, I am specifying AmbientCapabilities=CAP_NET_BIND_SERVICE in the systemd service unit like so:
[Service]
User=myapp
ExecStart=/usr/lib/myapp.jar
SuccessExitStatus=143
AmbientCapabilities=CAP_NET_BIND_SERVICE
However, the capability seems to be lost when the embedded launch script runs Java. As a test, I substituted my own one-line bash script, java -jar /usr/lib/myapp.jar, to see if the capability was lost when running any bash script, but that worked. So as far as I can tell, something in the embedded launch script is causing the problem.
With that said, I'm most likely going to drop the embedded launch script in favor of launching Java directly from the systemd service unit. But in case (1) others have the same problem, or (2) this is an actual bug and not just me missing something, I did want to at least ask the question.
Cheers!
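For reference, the workaround mentioned in the question, launching Java directly from the unit so that no intermediate launch script can drop the ambient capability, might look like this (the java binary path is a hypothetical; the jar path and user are from the question):

```ini
[Service]
User=myapp
ExecStart=/usr/bin/java -jar /usr/lib/myapp.jar
SuccessExitStatus=143
AmbientCapabilities=CAP_NET_BIND_SERVICE
```

With AmbientCapabilities=CAP_NET_BIND_SERVICE applied to the java process itself, the unprivileged myapp user can bind port 443.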

best way to read a file in Spring Boot

I have a Spring Boot application that currently runs in embedded Tomcat. I have a file, states.csv, that I want to parse on startup to seed my states database table (I tried via Liquibase, but that refused to work).
I put the file in resources/main/ and that appears to work fine. My question is: if I decide against embedded Tomcat in the future (say, moving to AWS or a regular Tomcat), is this still the best location to keep files?
I don't want to code myself into a corner if there is a better way to do this.
This depends entirely on how you're reading the file. As long as you're grabbing it out of the classpath, you should be fine. (And I've run single-jar applications on both basic AWS VMs and Cloud Foundry on EC2 with no difficulty at all.)
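For illustration, here is one plain-JDK way to "grab it out of the classpath" as the answer suggests, which behaves the same in an IDE, an exploded deployment, or a fat jar. The helper names are hypothetical; the resource just needs to be packaged on the classpath, as the question's states.csv already is:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ResourceReader {

    // Open a resource from the classpath; returns null if it is absent.
    static InputStream open(String name) {
        return ResourceReader.class.getClassLoader().getResourceAsStream(name);
    }

    // Read a classpath resource into lines (e.g. the rows of states.csv).
    static List<String> readLines(String name) throws IOException {
        InputStream in = open(name);
        if (in == null) {
            throw new IOException("resource not found on classpath: " + name);
        }
        List<String> lines = new ArrayList<>();
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        return lines;
    }
}
```

Because the lookup goes through the class loader rather than the filesystem, the seeding code needs no changes if the deployment model changes later.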
