Setting environment variables in Octopus - octopus-deploy

I'd like to specify an environment variable, so I could verify whether it's a stage/production/development environment (either ASPNETCORE_Environment or a custom one). Does Octopus do it by default or do I have to set it up manually?

During the deployment, the variable #{Octopus.Environment.Name} will resolve to the name of the Octopus environment you are deploying to. For example, if your Octopus instance defines the environments Development, Staging, and Test:
Deploying to Development resolves #{Octopus.Environment.Name} to Development.
Deploying to Staging resolves it to Staging.
Deploying to Test resolves it to Test.
But this variable will only be available within the context of the Octopus deployment. If you are looking to set something more persistent, you'll have to PowerShell your way through it using a script step in your deployment process, along these lines:
# Single quotes keep PowerShell from expanding the $ characters in the value.
[System.Environment]::SetEnvironmentVariable('MyPassword', 'P4$$w0rd123', [System.EnvironmentVariableTarget]::Machine)
There's more info about the above command in this blog post.
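On a Linux target, a rough Bash equivalent of the same idea, wired up to the ASPNETCORE variable from the original question, might look like the sketch below; the profile.d path and the choice of a Bash script step are assumptions for illustration:
# Octopus substitutes #{Octopus.Environment.Name} in the inline script before it runs.
# Writing a profile.d snippet makes the value visible to future login shells
# (assumes the deployment user is allowed to write there).
echo "export ASPNETCORE_ENVIRONMENT='#{Octopus.Environment.Name}'" > /etc/profile.d/aspnetcore-env.sh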

Octopus has a set of system variables. The one you are asking about is:
#{Octopus.Environment.Name}

Related

How to use environment variables in prisma

According to the Prisma documentation, the Prisma CLI tries to download its engine binaries from Prisma's S3 bucket. My corporate firewall blocks this download, so, following the same documentation, I must point the CLI at a different binary source using the PRISMA_ENGINES_MIRROR variable.
To use this variable, I need to set an environment variable. My build environment is Elastic Beanstalk: the build starts after a git push, and from that point on I cannot configure environment variables in the build environment. So I am considering writing the PRISMA_ENGINES_MIRROR variable to a .env file and pushing that.
Is this possible, and how can I make use of the variable via .env?
If someone has an opinion, please let me know.
Thanks
You can configure environment variables in Elastic Beanstalk by going to
Configuration > Software Configuration > Environment Properties
You can add PRISMA_ENGINES_MIRROR in Environment Properties and it will be picked up just as if it were defined in .env.
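If you do go the .env route instead, a minimal sketch might look like this; the mirror URL is a placeholder, and this relies on the Prisma CLI reading a .env file from the project root (which it does by default):
# .env at the project root - loaded automatically by the Prisma CLI
PRISMA_ENGINES_MIRROR=https://binaries.mirror.example.com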

What is a foolproof way to make environment variables available to a script?

I have this script on an Ubuntu VM that uses an environment variable that is set in a script in /etc/profile.d/appsetup.sh.
The variable is used in my script, server-up.sh, to start my Java app:
export path=$KEYSTORE_PATH
java -jar -Dsecurity.keystore.path=$path [jarfile]
If I echo the variable, it works:
$ echo $KEYSTORE_PATH
/etc/ssl/certs
And if I run the script on my own (sudo sh server-up.sh) it runs and uses the environment variable just fine.
However, when the script is executed from Jenkins' "Execute Shell" step (on the same VM), it apparently can't access the environment variable, even though supposedly it's available system-wide.
I've tried setting the owner of server-up.sh to both root and jenkins and Jenkins runs it either way, but in neither case does it get the environment variables. In Jenkins, I also tried using the command sudo -E /[path]/server-up.sh but then the job fails and an error says sudo: sorry, you are not allowed to preserve the environment.
I've googled a dozen times for various things, but almost everything that comes up is people asking how to set environment variables in Jenkins, and I don't need to do that; I just want a script that Jenkins can execute have access to system environment variables.
What do I need to do to get this working?
Make a small change so that the /etc/profile.d/appsetup.sh script also writes the variable out to a file; the Jenkins job can then read that file and recreate the environment variable it needs to run successfully.
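A minimal sketch of that idea, assuming a hypothetical file path of /etc/jenkins-env.sh:
# In /etc/profile.d/appsetup.sh, alongside the existing export:
echo "export KEYSTORE_PATH=/etc/ssl/certs" > /etc/jenkins-env.sh

# In the Jenkins "Execute Shell" step, source the file before running the script:
. /etc/jenkins-env.sh
sh server-up.sh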
I don't think the context and needs are explained well enough to answer the question with a definitive "here's how you do it", but here is the lay of the land.
On a server, jenkins.war launches from a shell (or a root shell invokes a shell script whose commands launch Jenkins), which has an environment and to which you can set and pass parameters. Those exist in the context of the jenkins.war process. If you run it from a daemon (init.d / systemd), you get a non-interactive shell, which is set up differently from your normal shell.
Your Jenkins will typically have Nodes launching agents on remote servers. Those are typically launched via a non-interactive shell (so no user .profile settings).
Then the jobs themselves run on one of the agents, where the executor launches a shell for the job execution. Sub-shells may be launched for specific steps.
The two contexts you mention, sudo sh server-up.sh and Jenkins' "Execute Shell" step, do not inherit the same environment even on the same VM: the Node is launched in its own process using a non-interactive shell and is not aware of anything in your server-up.sh script; it (generally) just gets /etc/profile.
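To see the difference for yourself, dump the environment from both contexts and compare; this is purely a diagnostic, not a fix:
# In your interactive login shell:
env | sort > /tmp/env-interactive.txt
# In a Jenkins "Execute Shell" build step:
env | sort > /tmp/env-jenkins.txt
# Then compare the two:
diff /tmp/env-interactive.txt /tmp/env-jenkins.txt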
You have options. You can set Global variables within Jenkins: ${JENKINS_URL}/configure
Global Properties
[ X ] Environment variables
[ X ] Prepare jobs environment (requires Env Inject plugin)
The same options also exist at the Node level
You can install the slaves-setup plugin, which allows you some customization when launching agents (aka slaves).
You can install the Environment Injector plugin, which adds the previously mentioned Prepare jobs environment feature.
It also adds job-specific configuration options for:
[ X ] Prepare an environment for the run
Plus under the Build Environment section,
[ X ] Inject environment variables to the build process
and
[ X ] Inject passwords to the build as environment variables
Those are encrypted, and I believe they are masked.
Finally, you can add a build step to Inject environment variables, useful if you need to have different values for different steps.
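The injected variables are plain KEY=VALUE properties, so the step's content would look something like this (the value is illustrative):
# Properties content for the "Inject environment variables" build step
KEYSTORE_PATH=/etc/ssl/certs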
BUT it's certs in a keystore!
Given that what you are trying to make available is $KEYSTORE_PATH=/etc/ssl/certs, I wonder if you've explored using the Credentials plugin? It supports a wide variety of credential types, including:
Password
Username and password
SSH private key
Public Certificate and private key
Binary blob data
That OAuth thingy
The obvious benefit to using this approach vs cooking your own is that it's been designed to work securely with Jenkins, so your secrets don't get inadvertently exposed. Aside from the extensive documentation on the plugin site, there's more in the Jenkins handbook on Using credentials and on using them in a pipeline (which also mentions that the Snippet Generator will stub the call for you), plus Injecting secrets into builds on the CloudBees site. You can probably find plenty of help here on S/O and on DevOps.
You may also wish to explore the newly introduced Git credentials binding for sh, bat, and powershell, though not sure that's applicable in your case.

Is there a way to set non-secret environment variables in Github Actions on the Settings page?

As far as I know, there are two ways to set environment variables in Github Actions:
Hardcoding them into YAML file
Adding them as repository secrets on the settings page
But what if I don't want them to be secret? SERVER_PREFIX and ANALYTICS_ENABLED, for example, shouldn't be secret. Is there a way to set up env variables on the settings page and make them visible? In Travis we had that option.
There isn't an option to add non-secret env variables on the GitHub settings page at the moment.
You can, however, create workflow-scoped env variables in the workflow file:
env:
  SERVER_PREFIX: SOME_PREFIX
Then access them with:
${{ env.SERVER_PREFIX }}
If you don't need to use them in the Action's YAML, just define your variables in a downloadable file and then use something like curl or wget to get them into your build environment.
For instance, I've done something similar for common CI files: several projects now run the same build scripts, and each project's local action simply downloads an .sh file and runs it.
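A rough sketch of that pattern inside a workflow's run step, with a hypothetical URL and file name:
# Download the shared variables/script and bring it into the job's shell:
curl -fsSL https://example.com/ci/common-env.sh -o common-env.sh
. ./common-env.sh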
If you need to set up variables in one of your build steps, to be used later by some other action, have a look at this (but I've never tried it myself).

Accessing Meteor Settings in a Self-Owned Production Environment

According to Meteor's documentation, we can include a settings file through the command line to provide deployment-specific settings.
However, the --settings option seems to only be available through the run and deploy commands. If I am running my Meteor application on my own infrastructure - as outlined in the Running on Your Own Infrastructure section of the documentation - there doesn't seem to be a way to specify a deployment-specific settings file anywhere in the process.
Is there a way to access Meteor settings in a production environment, running on my own infrastructure?
Yes, include the settings contents in an environment variable called METEOR_SETTINGS. For example,
export METEOR_SETTINGS='{"privateKey":"MY_KEY", "public":{"publicKey":"MY_PUBLIC_KEY", "anotherPublicKey":"MORE_KEY"}}'
And then run the meteor app as normal.
This will populate the Meteor.settings object as normal. For the settings above,
Meteor.settings.privateKey == "MY_KEY" #Only on server
Meteor.settings.public.publicKey == "MY_PUBLIC_KEY" #Server and client
Meteor.settings.public.anotherPublicKey == "MORE_KEY" #Server and client
For our project, we use an upstart script and include it there (upstart has a slightly different syntax; see the sketch at the end of this answer). However, if you are starting it with a normal shell script, you just need to include that export statement before your node command. You could, for example, have a script like:
export METEOR_SETTINGS='{"stuff":"real"}'
node /path/to/bundle/main.js
or
METEOR_SETTINGS='{"stuff":"real"}' node /path/to/bundle/main.js
You can find more information about bash variables here.
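For reference, the upstart variant mentioned above would look roughly like this; the job file path and app path are hypothetical, and the usual start on/stop on stanzas are omitted for brevity:
# /etc/init/myapp.conf (hypothetical upstart job)
env METEOR_SETTINGS='{"stuff":"real"}'
exec node /path/to/bundle/main.js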

teamcity - 'java' is not recognized as an internal or external command

I'm using TeamCity 5.1.5, trying to build an MSBuild project with an AfterDeploy target which calls a Java function.
I get the following error: 'java' is not recognized as an internal or external command
I've tested the java command on the build server and the agent servers and they all run the command, but it seems to fail when running through TeamCity.
Any ideas?
I've checked the build agent env vars and they seem correctly set up:
Environment variables defined in the agent configuration file:
JAVA_HOME     C:\Program Files\Java\jdk1.6.0_21
JDK_16        C:\Program Files\Java\jdk1.6.0_21
TEAMCITY_JRE  C:\TeamCity\jre
The only thing I notice is that the java.exe binaries are actually in the bin folders here, not the root folders.
Build Agent runs from the SYSTEM account by default. SYSTEM account environment variables differ from your normal account which you've used for testing. I suspect that java.exe is not in PATH for the SYSTEM account. Either adjust PATH by adding JDK_HOME\bin to it or configure the Agent service to run from a different account.
Remember you need to restart the build agent service before changes to PATH will take effect.
You can also run your build agent service under a local administrator account (this might be preferred for several reasons), but there is a bug in TeamCity where only the USER environment variables (for example the PATH) are used by the agent, not SYSTEM+USER as normal in Windows.
So if you have a path defined for the user, the system paths are unknown by the agent!
The workaround right now (verified) is to add the user path to the system path and delete the user path (under System/Advanced System Settings/Environment Variables).
The bug is reported here (not solved as of 2012-01-29):
http://devnet.jetbrains.net/thread/276957
We run the agent under a normal user account. Java can be found in an interactive session, but not in the TeamCity builds. I had to add the java bin directory to the PATH variable of the user. After a log off/log in, the java command could then be found by the TeamCity builds.
