Post-deployment script that reads an environment variable inside a deployed pod - shell

I have a Kubernetes Job whose responsibility is to post a JAR to Flink (via the Flink API) and run it.
In response it gets back a job ID from the Flink API, which I need to use in a test script to see whether my job is running or not. The job runs inside the container/pod spawned by job.yaml, and the test script will not run from that same pod/container.
If I save this job ID as an environment variable inside the container/pod spawned by job.yaml, is there a way to access that environment variable from outside the pod? I am not even allowed to get into the container manually (to print the environment variables) using kubectl exec -it podname /bin/bash; it says I can't get inside a completed (not running) pod. So I am not sure I can do it via a script either.
Are there any alternatives that would let my test scripts access the job ID I set as an environment variable inside the container/pod (spawned by job.yaml)?
In summary: is there a way to access an environment variable I set inside a pod from a script that runs outside of the pod?
Thank you...
Pavan.

No, you can't use an environment variable for that.
You could add an annotation from inside your pod.
For that you will need to set up:
a service account, so your pod is allowed to annotate itself
the Downward API, so your pod knows its own name
Then you will be able to access the annotation from another pod/container.
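A minimal sketch of that approach, assuming the Job's service account has been granted RBAC permission to patch pods and that kubectl is available in the image; the annotation key flink-job-id, the job name my-flink-job, and the variable names are all hypothetical:
# In job.yaml, expose the pod's own name via the Downward API:
#   env:
#     - name: POD_NAME
#       valueFrom:
#         fieldRef:
#           fieldPath: metadata.name
# Inside the container, once the Flink API returns the job id:
kubectl annotate pod "$POD_NAME" flink-job-id="$FLINK_JOB_ID" --overwrite
# In the test script, outside the pod (Job pods carry a job-name label):
FLINK_JOB_ID=$(kubectl get pods -l job-name=my-flink-job \
  -o jsonpath='{.items[0].metadata.annotations.flink-job-id}')
Unlike an environment variable, the annotation remains readable after the pod completes, for as long as the pod object itself is not deleted.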

Related

How to run pre/post job scripts on self-hosted GitHub Actions runner

Problem
I am trying to use a Windows Docker container to run GitHub Actions.
I want to run scripts before and after the job (e.g. to clean the directory).
I have successfully done this before on a computer not running docker, so I figured the same should work in docker.
What I have tried
I found here that you can do that using Environment Variables.
I used the following two commands in command prompt to set the environment variables.
Pre-Job Script:
setx ACTIONS_RUNNER_HOOK_JOB_STARTED C:\actions-runner-resources\scripts\pre-post-build\pre-run-script.ps1
Post-Job Script:
setx ACTIONS_RUNNER_HOOK_JOB_COMPLETED C:\actions-runner-resources\scripts\pre-post-build\post-run-script.ps1
The scripts do not run.
I have tried restarting the docker container.
I have tried restarting the actions runner service.
I am new to Docker, so I am wondering if the way I am setting the environment variables simply does not work in Docker.
How do I get the actions runner to run pre/post job scripts in docker?
You can safely add them as environment variables using this recommended method:
Inside the actions-runner directory, locate the .env file and edit it, adding your environment variables. Save the file and restart the runner service.
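For the two hooks from the question, the .env entries would look something like this (paths copied from the question; the runner reads this file when the service starts):
ACTIONS_RUNNER_HOOK_JOB_STARTED=C:\actions-runner-resources\scripts\pre-post-build\pre-run-script.ps1
ACTIONS_RUNNER_HOOK_JOB_COMPLETED=C:\actions-runner-resources\scripts\pre-post-build\post-run-script.ps1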

Is it Possible to Have Docker Compose Read from AWS Secrets Manager?

I currently have a bash script that "simulates" an ECS task by spinning up 3 containers. Some of the containers pull their secrets and configuration overrides from Secrets Manager directly (i.e. it's baked into the container code), while others take configuration overrides through Docker environment variables, which requires the secrets to be retrieved from Secrets Manager first, exported to variables, and the containers then started with the just-exported environment variables. This works fine, and it is done just so developers can test locally on their workstations; we do not deploy with Docker Compose. The current bash script makes calls out to AWS and exports the values to environment variables.
However, I would like to use Docker Compose going forward. The question I have is "Is there a way for Docker Compose to call out to AWS and get the secrets?"
I don't see a native way to do this with Docker Compose, so I am thinking of going out and getting ALL the secrets for ALL the containers. My current script would be modified to do this:
The bash script would get all the secrets and export the values to environment variables.
The script would then invoke the Docker Compose YAML, referencing the exported variables created in step 1 above.
It would be nice if I didn't have to use the bash script at all, but I know of no intrinsic way of pulling secrets from Secrets Manager from the Docker Compose YAML. Is this possible?
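The two-step plan above can stay quite small. A minimal sketch, assuming the AWS CLI is configured; the secret ID my-app/db-password and the variable name DB_PASSWORD are hypothetical:
#!/usr/bin/env bash
# Step 1: fetch each secret from Secrets Manager and export it.
export DB_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id my-app/db-password \
  --query SecretString --output text)
# Step 2: docker-compose interpolates the exported variables referenced
# in the YAML (e.g.  environment: { DB_PASSWORD: "${DB_PASSWORD}" }).
docker-compose up -d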

What is a foolproof way to make environment variables available to a script?

I have this script on an Ubuntu VM that uses an environment variable that is set in a script in /etc/profile.d/appsetup.sh.
The variable is used in my script, server-up.sh, to start my Java app:
export path=$KEYSTORE_PATH
java -jar -Dsecurity.keystore.path=$path [jarfile]
If I echo the variable, it works:
$ echo $KEYSTORE_PATH
/etc/ssl/certs
And if I run the script on my own (sudo sh server-up.sh) it runs and uses the environment variable just fine.
However, when the script is executed from Jenkins' "Execute Shell" step (on the same VM), it apparently can't access the environment variable, even though supposedly it's available system-wide.
I've tried setting the owner of server-up.sh to both root and jenkins and Jenkins runs it either way, but in neither case does it get the environment variables. In Jenkins, I also tried using the command sudo -E /[path]/server-up.sh but then the job fails and an error says sudo: sorry, you are not allowed to preserve the environment.
I've googled a dozen times for various things, but almost everything that comes up is people asking how to set environment variables in Jenkins, and I don't need to do that; I just want a script that Jenkins can execute have access to system environment variables.
What do I need to do to get this working?
Make a small change to allow the /etc/profile.d/appsetup.sh script to output the variable to a file; the Jenkins job can then read that file and recreate the environment variable it needs to run successfully.
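A minimal sketch of that, using a hypothetical hand-off path /var/tmp/app-env.sh:
# Added at the end of /etc/profile.d/appsetup.sh:
echo "export KEYSTORE_PATH=$KEYSTORE_PATH" > /var/tmp/app-env.sh
# In the Jenkins "Execute Shell" step, source the file first:
. /var/tmp/app-env.sh
sh server-up.sh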
I don't think the context and needs are explained well enough to properly answer the question with a definitive "here's how you do it".
On a server, jenkins.war launches from a shell (or from a root shell which invokes a shell script whose commands launch Jenkins), which has an environment and to which you can set and pass parameters. Those exist in the context of the jenkins.war process. If you run from a daemon (init.d / systemd), you get a non-interactive shell, which is set up differently from your normal shell.
Your Jenkins will typically have Nodes launching agents on remote servers. Those are typically launched via a non-interactive shell (so no user .profile settings).
Then the jobs themselves run on one of the agents, where the executor launches a shell for the job execution. Sub-shells may be launched for specific steps.
The two contexts you mention, sudo sh server-up.sh and Jenkins' "Execute Shell" step, do not inherit the same environment even on the same VM: the Node is launched in its own process using a non-interactive shell and is not aware of anything set by your /etc/profile.d/appsetup.sh script; it (generally) just gets /etc/profile.
You have options. You can set Global variables within Jenkins: ${JENKINS_URL}/configure
Global Properties
[ X ] Environment variables
[ X ] Prepare jobs environment (requires Env Inject plugin)
The same options also exist at the Node level
You can install the slaves-setup plugin, which allows you some customization when launching agents (aka slaves).
You can install the Environment Injector plugin, which adds the previously mentioned Prepare jobs environment feature.
It also adds jobs specific configuration options for:
[ X ] Prepare an environment for the run
Plus under the Build Environment section,
[ X ] Inject environment variables to the build process
and
[ X ] Inject passwords to the build as environment variables
Those are encrypted and, I believe, masked.
Finally, you can add a build step to Inject environment variables, useful if you need to have different values for different steps.
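For example, the "Inject environment variables" build step accepts properties-style content; for the variable from the question it could be as simple as:
KEYSTORE_PATH=/etc/ssl/certs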
BUT it's certs in a keystore!
Given that you also mention what you are trying to make available is $KEYSTORE_PATH=/etc/ssl/certs, I wonder if you've explored using the Credentials plugin? It supports a wide variety of credential types, including:
Password
Username and password
SSH private key
Public Certificate and private key
Binary blob data
That OAuth thingy
The obvious benefit of using this approach vs. cooking your own is that it's been designed to work securely with Jenkins, so your secrets don't get inadvertently exposed. Aside from the extensive documentation on the plugin site, there's more in the handbook on Using credentials and on using them in a pipeline (which also mentions that the Snippet Generator will stub it for you), and on the CloudBees site: Injecting secrets into builds. You can probably find plenty of help here on S/O and DevOps.
You may also wish to explore the newly introduced Git credentials binding for sh, bat, and powershell, though not sure that's applicable in your case.

Different profile per Spring Boot application instance in Cloud Foundry

Is it possible to programmatically set a different profile for every instance of a Spring Boot application deployed in Cloud Foundry, using, for example, ConfigurableEnvironment and the Cloud Foundry instance index?
I would suggest that you look into using tasks.
https://docs.cloudfoundry.org/devguide/using-tasks.html
Here's roughly how this would work.
Run cf push to deploy your application to CF. If you do not actually have an application to run, that is OK; you just need to push the app and start it once so that it stages and creates a droplet. After that, you can run cf stop to shut down the instance (note: cf push --no-start won't work, because the app needs to stage at least once).
Run cf run-task <app> <command>. This is where you kick off your batch jobs. The <command> argument is going to be the full command to run your batch job. In this, you can include an argument to indicate the profiles that should be used. Ex: --spring.profiles.active=dev,hsqldb.
https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-profiles.html
You need to use the full or relative path to the java executable because the Java buildpack does not put it onto the PATH. If you wanted to run a task that printed the version of the JVM, you'd use the command '.java-buildpack/open_jdk_jre/bin/java -version'.
Ex: cf run-task <app> '.java-buildpack/open_jdk_jre/bin/java -version'
See this SO post, though, for the drawbacks of hardcoding the path to the Java executable in your command. My suggestion would be to take the command that's listed when you run cf push and modify it to your needs.
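Putting the steps together, the flow could look roughly like this (the app name my-app and the jar name app.jar are hypothetical; check the start command shown when you push for the exact paths):
cf push my-app
cf stop my-app
cf run-task my-app '.java-buildpack/open_jdk_jre/bin/java -jar app.jar --spring.profiles.active=dev,hsqldb'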

Set global environment variables inside Xcode build phase run script

I'm using Jenkins to do continuous integration builds. I have quite a few jobs that have much of the same configuration code. I'm in the midst of pulling this all out into a common script file that I'd like to run pre and post build.
I've been unable to figure out how to set some environment variables within that script, so that both the Xcode build command, and the Jenkins build can see them.
Does anyone know if this is possible?
It is not possible to do exactly what you ask. A process cannot change the environment variables of another process. The pre and post and actual build steps run in different processes.
But you can create a script that sets the common environment variables and share that script between all your builds.
The build would first have your shell execute the commands in the script and then call xcodebuild:
# Note the dot in the beginning of the next line. It is not a typo.
. set_environment.sh
xcodebuild myawesomeapp.xcodeproj
The script could look like this:
export VARIABLE1=value1
export VARIABLE2=value2
How exactly your jobs will share the script depends on your environment and use case. You can
place the script in some well-known location on the Jenkins host or
place the script in the version controlled source tree if all your jobs share the same repository or
place the script in a repository of its own and make a Jenkins build which archives the script as a build artifact. All the other jobs would then use Copy Artifact plugin to get a copy of the script from the artifacts of script job.
From Apple's Technical Q&A QA1067 it appears that if you create the file /Users/YOU/.MacOSX/environment.plist and populate it with your desired environment variables, then all processes launched by that user (with the environment.plist file in their home directory) will pick up these environment variables. You may need to restart your computer (or just log out and back in) before newly launched processes pick up the variables.
This article also claims that Xcode will pass these variables to a build phase script. I have not tested it yet, but next time I restart my MacBook I will let you know if it worked.
From http://developer.apple.com/library/mac/#/legacy/mac/library/qa/qa1067/_index.html
Q: How do I set the environment for all processes launched by a specific user?
A: It is actually a fairly simple process to set environment variables for processes launched by a specific user.
There is a special environment file which loginwindow searches for each time a user logs in. The environment file is: ~/.MacOSX/environment.plist (be careful, it's case sensitive), where '~' is the home directory of the user we are interested in. You will have to create the .MacOSX directory yourself using Terminal (by typing mkdir .MacOSX). You will also have to create the environment file yourself. The environment file is actually in XML/plist format (make sure to add the .plist extension to the end of the filename or this won't work).
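A minimal environment.plist sketch, reusing the VARIABLE1 example from the answer above:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>VARIABLE1</key>
	<string>value1</string>
</dict>
</plist>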
