Different profile per Spring Boot application instance in Cloud Foundry - spring-boot

Is it possible to programmatically set a different profile for every instance of a Spring Boot application deployed in Cloud Foundry, for example by using ConfigurableEnvironment and the Cloud Foundry instance index?
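For illustration, here is a minimal sketch of the kind of approach being asked about: an EnvironmentPostProcessor that reads the CF_INSTANCE_INDEX variable Cloud Foundry sets on each instance and derives a profile name from it (the profile naming scheme here is hypothetical):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;

// Must be registered in META-INF/spring.factories under
// org.springframework.boot.env.EnvironmentPostProcessor
public class InstanceProfilePostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment,
                                       SpringApplication application) {
        // Cloud Foundry sets CF_INSTANCE_INDEX (0, 1, 2, ...) per instance
        String index = environment.getProperty("CF_INSTANCE_INDEX");
        if (index != null) {
            // Hypothetical scheme: activates "instance-0", "instance-1", ...
            environment.addActiveProfile("instance-" + index);
        }
    }
}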

I would suggest that you look into using tasks.
https://docs.cloudfoundry.org/devguide/using-tasks.html
Here's roughly how this would work.
Run cf push to deploy your application to CF. If you do not actually have an application to run, that is OK: you just need to push the app and start it once so that it stages and creates a droplet. After that, you can run cf stop to shut down the instance (note: cf push --no-start won't work, because the app needs to stage at least once).
Run cf run-task <app> <command>. This is where you kick off your batch jobs. The <command> argument is going to be the full command to run your batch job. In this, you can include an argument to indicate the profiles that should be used. Ex: --spring.profiles.active=dev,hsqldb.
https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-profiles.html
You need to use the full or relative path to the java executable because the Java buildpack does not put it on the PATH. For example, to run a task that prints the JVM version, you would use the command '.java-buildpack/open_jdk_jre/bin/java -version'.
Ex: cf run-task <app> '.java-buildpack/open_jdk_jre/bin/java -version'
See this SO post, though, for the drawbacks of hardcoding the path to the Java executable in your command. My suggestion would be to take the command that's listed when you run cf push and modify it to your needs.
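Putting the pieces together, a task invocation might look something like the following (a sketch only: the app name, the exploded-droplet classpath, and the profile names are assumptions; the actual start command shown by cf push is the authoritative template):

cf run-task my-app '.java-buildpack/open_jdk_jre/bin/java -cp . org.springframework.boot.loader.JarLauncher --spring.profiles.active=dev,hsqldb'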

Related

Post deployment script that reads environment variable inside deployed pod

I have a Kubernetes Job whose responsibility is to post a jar to Flink (via the Flink API) and run that jar.
In response it gets a job id from the Flink API, which I need to use in a test script to check whether my job is running. The job runs inside the container/pod spawned by job.yaml, but the test script does not run in that same pod/container.
If I save this job id as an environment variable inside the container/pod spawned by job.yaml, is there a way to access that environment variable from outside the pod? I can't even get into the container manually (to print the environment variables) using kubectl exec -it podname /bin/bash, because I'm told I can't exec into a completed (not running) pod, so I doubt a script could do it either.
Are there any alternatives that would let my test scripts access the job id I set as an environment variable inside the container/pod (spawned by job.yaml)?
In summary: is there a way to access an environment variable set inside a pod from a script that runs outside the pod?
Thank you...
Pavan.
No, you can't use an environment variable for that.
You could, however, add an annotation from inside your pod.
For that you will need to set up:
a service account, so the pod has permission to annotate itself
the Downward API, so the pod can discover its own name
Then you will be able to read the annotation from another pod/container, as sketched below.
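A rough sketch of the idea (the annotation key flink-job-id is hypothetical, POD_NAME is assumed to be injected via the Downward API, and the job's container image is assumed to have kubectl or an equivalent API client):

# Inside the job's container, after getting the job id from Flink:
kubectl annotate pod "$POD_NAME" flink-job-id="$JOB_ID"

# From the test script, outside the pod:
kubectl get pod "$POD_NAME" -o jsonpath="{.metadata.annotations['flink-job-id']}"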

Where to set env variables for local Spring Cloud Dataflow?

For development, I'm using the local Spring Cloud Dataflow server on my Mac, though we plan to deploy to a Kubernetes cluster for integration testing and production. The SCDF docs say you can use environment variables to configure various things, like database configuration. I'd like my registered apps to use these env variables, but they don't seem to see them. That is, I start the SCDF server by running its jar from a terminal window, which can see a set of environment variables. I then configure a stream using some Spring Cloud Stream starter apps and one custom Spring Boot app. I have the custom app logging System.getenv(), and it's not showing the env variables I need. I set them in my ~/.bashrc file, which I also source from ~/.bash_profile. That works for my terminal windows and most other things that need the environment, but not here. Where should I be defining them?
To the points in the first answer and comments: they sound good, but nothing works for me. I have an SQS Source that gets its connection via:
return AmazonSQSAsyncClientBuilder.standard()
        .withRegion(Regions.US_WEST_2.getName())
        .build();
When I deploy to a Minikube environment, I edit the sqs app's deployment and set the AWS credentials in the env section. Then it works. For a local deployment, I've now tried:
stream deploy --name greg1 --properties "deployer.sqs.AWS_ACCESS_KEY_ID=<id>,deployer.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "deployer.sqs.aws_access_key_id=<id>,deployer.sqs.aws_secret_access_key=<secret>"
stream deploy --name greg1 --properties "app.sqs.AWS_ACCESS_KEY_ID=<id>,app.sqs.AWS_SECRET_ACCESS_KEY=<secret>"
stream deploy --name greg1 --properties "app.sqs.aws_access_key_id=<id>,app.sqs.aws_secret_access_key=<secret>"
All fail with the error message I get when credentials are wrong, which is, "The specified queue does not exist for this wsdl version." I've read the links, and don't really see anything else to try. Where am I going wrong?
You can pass environment variables to the apps that are deployed via SCDF using application properties or deployment properties. Check the docs for a description of each type.
For example:
dataflow:> stream deploy --name ticktock --properties "deployer.time.local.javaOpts=-Xmx2048m -Dtest=foo"
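For the SQS case above, one option worth trying (a sketch, not verified against this exact setup): AmazonSQSAsyncClientBuilder.standard() resolves credentials through the AWS SDK's default provider chain, which also reads the aws.accessKeyId and aws.secretKey JVM system properties, so the credentials can be passed through the documented local.javaOpts deployer property:

dataflow:> stream deploy --name greg1 --properties "deployer.sqs.local.javaOpts=-Daws.accessKeyId=<id> -Daws.secretKey=<secret>"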

spring-boot launch-script: how to avoid pid_folder identity-subdirectory?

We are using spring-boot with the embedded launcher-script in service mode, to have daemonized/init.d behavior.
However, we do not have an /etc/init.d symlink to the Spring Boot jar, as that would require using sudo. We avoid sudo so that we can pass a profile setting like -Dspring.profiles.active=$APP_PROFILE via JAVA_OPTS (that doesn't work when the app is started via sudo but the variable is defined in /home/appuser/.bashrc).
We have this directory layout with some indirection; basically app.jar => current/app.jar => build-xx/app.jar:
appuser@host:~/apps/services$ ls
app.jar -> /home/appuser/apps/services/current/services-1.0-SNAPSHOT.jar
current -> /home/appuser/apps/services/services-1298
services-1298
When the application is started with app.jar start, the launch script generates an additional pid subdirectory inside the pid folder, based on the "identity" of the program. For us this can look like this:
/home/appuser/apps/services/run/services-1.0-SNAPSHOT_homeappuserappsservicesservices-1298/services.pid
This is unlike a symlinked /etc/init.d entry, which gets special treatment: there the pid subdir services-1.0-SNAPSHOT_homeappuserappsservicesservices-1298 is omitted and the location stays stable.
This dynamic pid subdir makes it very hard for us to check the daemon's status or to start/stop it during deployment, because you always have to get the sequence right, and nothing stops you from starting the process twice (the old instance, and then a new instance with a new identity subdir).
So, does anyone know why this pid-subdir-identity stuff must exist and what would be our best way to deal with it?
Do we have a bad setup?
Any advice appreciated.
You can control the identity by using the APP_NAME environment variable.
I'd recommend configuring your service's environment variables using a .conf file next to the jar file. For example, if your app is called app.jar, your conf file should be named app.conf and be placed in the same directory as the jar. You can then configure APP_NAME, JAVA_OPTS, etc. for your application, as sketched below. This should also allow you to use init.d if you so wish.
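A minimal app.conf sketch for the layout above (the APP_NAME value and the pid folder are assumptions; the file is sourced by the launch script, so plain shell variable assignments work):

# /home/appuser/apps/services/app.conf -- sourced by the embedded launch script
APP_NAME=services
PID_FOLDER=/home/appuser/apps/services/run
JAVA_OPTS="-Dspring.profiles.active=$APP_PROFILE"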

How can I properly configure a gcloud account for my Gradle Docker plugin when using GCR?

Our containers are hosted using Google Container Registry, and I am using id "com.bmuschko.docker-java-application" version "3.0.7" to build and deploy docker containers. However, I run into permission issues whenever I try to pull the base image or push the image to GCR (I am able to get to the latter step by pulling the image and having it available locally).
I'm a little bit confused by how I can properly configure a particular GCloud account to be used whenever issuing any Docker related calls over a wire using the plugin.
As a first attempt, I've tried to create a task that precedes any build or push commands:
task gcloudLogin(type: Exec) {
    executable "gcloud"
    args "auth", "activate-service-account", "--key-file", "$System.env.KEY_FILE"
}
However, this simple wrapper doesn't work as desired. Is there currently a supported way to have this plugin work with GCR?
I got in touch with the maintainers of the Gradle Docker plugin, and we found a working solution in that discussion.
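The link to that discussion is not preserved here. For reference, one common pattern for authenticating this plugin against GCR (a sketch only, assuming the 3.x registryCredentials DSL and reusing the KEY_FILE variable from the question; not necessarily the maintainers' solution) is the special _json_key username with the service-account key file's contents as the password:

docker {
    registryCredentials {
        url = 'https://gcr.io'
        username = '_json_key'
        password = file("$System.env.KEY_FILE").text
    }
}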

Where do I set the script MODE

I want to create an init.d service for my spring-boot app.
I want it to be stopped by default and only run when the user runs service my-app start.
Where do I set MODE=service, as described on the page below?
http://docs.spring.io/spring-boot/docs/current/reference/html/deployment-install.html
Found I needed to add the embeddedLaunchScriptProperties option to the Spring Boot Maven plugin.
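A sketch of the relevant pom.xml configuration (assuming the executable-jar setup; mode is one of the launch script properties the linked page documents):

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
        <embeddedLaunchScriptProperties>
            <!-- default is "auto"; "service" forces init.d-style behaviour -->
            <mode>service</mode>
        </embeddedLaunchScriptProperties>
    </configuration>
</plugin>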
