Not able to source a shell/bash file within the .env file - bash

I tried so many variations:
- using source
- nothing in front of the path
- using:

  # set -a # automatically export all variables
  './../private/some-cred.sh'
  # set +a

None of them let me import private creds into the .env file that is loaded by fastlane via Dotenv.overload './config/.env.qa'.
Current files:

./config/.env.qa:

# source './../private/fastlane-cred.sh'
# APPLE_API_KEY_ID=$APPLE_API_KEY_ID_PRIVATE

./private/some-cred.sh:

export APPLE_API_KEY_ID_PRIVATE="AAA"
When this setup is loaded into Bitrise (which runs from ./ios/) via Dotenv.overload './config/.env.qa', the value comes through as empty.
Any idea what else I should do to load variables from a file into the .env properly, so they are recognized?
NOTE: the path is correct, because a file reference defined within the .env file was loaded into the Bitrise ENV correctly - just not the variables.

What is important to know about the "dotenv" paradigm is that the various "dotenv" implementations mimic a shell environment instead of actually being a shell environment. Because they run from inside the application, written in a non-shell language, it is far too late at that point to use the shell to set up environment variables. The "dotenv" libraries aren't even actually changing the process's environment - they just use the runtime's API to store data in such a way that other code using the runtime's environment-access API will see it as if it came from the environment.
The fastlane system uses the dotenv gem, which uses an internal parser (based on regular expressions) to parse a file that looks like a shell file (containing only variable assignments) but isn't one. Anything that doesn't look like a naive shell variable assignment is ignored - the dotenv file isn't actually a shell script and isn't treated like one, so you can't put shell scripting commands in it and expect them to work.
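As a rough illustration, this is the kind of content such a parser does and does not pick up (a sketch, not the asker's actual file):

# picked up: lines that look like plain variable assignments
APPLE_API_KEY_ID=AAA
API_HOST="https://example.test"

# ignored: anything that isn't an assignment; a line such as
#   source './../private/some-cred.sh'
# is silently skipped by the parser rather than executed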
If you really want to use shell scripting to set up your environment, then use a wrapper shell script (or a Makefile) to start your environment, instead of relying on the application's internal "dotenv" support - which, as explained above, isn't actually a shell script.
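For example, a minimal wrapper could look like the sketch below; the file paths follow the question, while the lane name qa_lane is a placeholder:

#!/usr/bin/env bash
set -euo pipefail

# source the private creds in a real shell, then export what fastlane needs
source ./private/some-cred.sh
export APPLE_API_KEY_ID="$APPLE_API_KEY_ID_PRIVATE"

# fastlane (and its Dotenv.overload call) now inherits these variables
fastlane qa_lane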

Related

Access custom environment variable from Jenkins file

I need to access a system environment variable from my Jenkinsfile. I know there are some predefined variables (e.g. JOB_NAME or BUILD_NUMBER), but I need to access a custom environment variable which I set previously. What is the way to do this? It seems that env.MY_VARIABLE and env['MY_VARIABLE'] should work, but they don't. I need access to a variable which is set during the pipeline build inside a bash script. Perhaps there are more convenient ways to pass information from a bash script back to the Jenkinsfile that called it.
You access environment variables as ${DB_ENGINE} or $DB_ENGINE from bash, or by the same name in your Groovy job/pipeline DSL script, where DB_ENGINE is the custom environment variable you set.
Check the documentation.
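As a minimal illustration from the bash side (DB_ENGINE is the variable name used above, assumed to be already set in the job's environment):

# inside a Jenkins shell step, a custom variable reads like any other
echo "Database engine: ${DB_ENGINE}"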

How can I source a script using a zsh function, without spawning a subshell, to set environment vars?

I'm trying to run a zsh/bash script that writes several values to environment variables. I want these variables available in the parent shell, as they are required for several tools we use. If I manually run the script using '. myscript myparameter' I get the expected result; however, defining a zsh function to do this without having to use dot notation does not result in the variables being set.
I'm pretty new to zsh/bash scripting, and this has been my first real effort at writing something useful. I am running this on macOS but would like it to work on Linux as well. The script my function is sourcing does some bash logic and in some cases also executes a third-party executable (really a bash script that calls a Java binary). In my script I'm calling the third-party tool directly by its executable name; calling it using exec or dot notation does not seem to work properly.
zsh function:

function okta-auth {
    . okta_setprofile "$1"
}
My script:
https://gist.github.com/ckizziar/a60a84a6a148a8fd7b0ef536409352d3
Using '. okta_setprofile myprofile' I receive the expected output of the okta_setprofile script: four environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION, and AWS_SESSION_TOKEN) are set in my shell.
Using 'okta-auth myprofile' the script feedback is the same as before, but after execution the environment variables are not set or updated.
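For reference, the invocation that works looks like this (commands taken from the question):

# dot-sourcing runs okta_setprofile in the current shell, so its exports stick
. okta_setprofile myprofile
echo "$AWS_ACCESS_KEY_ID"   # set in this shell after sourcing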
Update 2019-02-06, showing the flow:
[okta_setprofile flow diagram]

How to nicely manipulate environment variables via bash scripts?

I want to add some helper commands to my shell. There are several commands I want to add, and they need to share some information between them. However, since I want a different state for each shell, I can't use files to store the shared information, but have to use environment variables.
This opens up the problem of setting environment variables: to change a variable in my shell and not only in a subprocess, I either need to put my commands in scripts and always source the scripts, or define them as functions and source the file via .bashrc.
I have also defined some auxiliary functions that are used by several of my commands, which I would prefer NOT to have in the scope of my main shell process.
I'm somewhat inexperienced with bash, so my question is:
What is the cleanest way to implement this? Should I put my commands into scripts or into functions? Can I prevent my auxiliary functions from being sourced into the main shell? Is there an easier way to manipulate environment variables?
You can put your environment variables inside a shell file (myEnv.sh), then use

source myEnv.sh

to load them whenever you need to. You can also source this file from your main shell scripts.
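For instance, myEnv.sh might contain something like this (a sketch; the variable names are placeholders):

# myEnv.sh - shared state for the helper commands
export PROJECT_ROOT="$HOME/work/project"
export BUILD_MODE="debug"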
I would recommend having .profile files instead of .sh files:
case1.profile, case2.profile, and so on, sourcing them whenever needed.
Use either of the methods below to source a file:

source ~/.case1.profile

or

. ~/.case1.profile

How to set environment variables in tcl?

When I source my .cshrc file and then run the Tcl script, it works fine:
$ source .cshrc-sample
$ tclsh invoke.tcl
Following is the .cshrc file:
setenv AUTOTEST "/auto/isbutest/frt"
setenv ATS_EASY "$AUTOTEST"
setenv ATS_USER_PATH "$AUTOTEST"
setenv PATH "${AUTOTEST}/bin:${PATH}"
But when I set the environment variables in Tcl itself and run the script, I get the following error:
$ tclsh invoke.tcl
can't find package ha
while executing
"package require ha"
(file "invoke.tcl" line 8)
My Tcl script - invoke.tcl:
global env
set env(AUTOTEST) "/auto/isbutest/frt"
set env(ATS_EASY) "/auto/isbutest/frt"
set env(ATS_USER_PATH) "/auto/isbutest/frt"
set env(PATH) "$env(PATH):/auto/isbutest/frt/bin:";
package require ha
How can I run the script without sourcing the .cshrc?
The thing is, setting an environment variable this way seems not to be possible from a script - the lifetime of the variable is limited to the runtime of the script. When I print the PATH variable it shows what is needed, but I don't know why it is not working. Is there any other workaround for this?
There are a few possibilities. The key things to look at are whether there are any other environment variables you've missed, whether the Tcl auto_path global variable is correct immediately before the package require, and whether there is anything else going on.
The easiest way from the Tcl side is to add:
puts "auto_path=$auto_path"
parray env
immediately before the package require that has the error. That should print out plenty of information. (Pay particular attention to whether you are setting the TCL_LIBRARY or TCLLIBPATH environment variables differently.)
Aside from that, it's possible that there is something set in the ~/.tclshrc file, which is only sourced in interactive mode (it happens before you get your prompt). That could cause observable changes. Another option is if the ha package's pkgIndex.tcl script is written to use abbreviated commands, which only work when Tcl is in interactive mode. Errors in the package index definition script will make the code that describes how to actually load/source the package's implementation not register, and could give you the error state you see. If the script is assuming it can use abbreviations, fix it as that's always a bug. Abbreviations are a convenience when using Tcl interactively, and should never be put in proper saved code.
You might want to check whether the list of packages is complete. Use this code for that:
catch {package require NoSuchPackage}; # Force immediate population of the list of packages
puts Packages:\n\t[join [lsort -dictionary [package names]] \n\t]
Again, put this in after any setting of global variables and before the problem package require.
Inside a Tcl script there is no setenv command (that's csh); you set an environment variable through the global env array: set env(AUTOTEST) "/auto/isbutest/frt".
If you want to set an ordinary variable, use set VARNAME "/auto/isbutest/frt".
If you want to read an environment variable, use $::env(AUTOTEST).
Any variable declared with the set command can be accessed as $VARNAME.

Save Global variables BASH

I am new to bash and trying to solve some issues for some code I'm writing.
At the terminal, under my user name, I start bash:

USER$
USER$ bash
bash$

Now, in bash, I set some variables, e.g.:
i=2
k=2
let p=$k*$i
Now I want to use those variables outside the bash session:

bash$ exit
USER$

but the variables are not there. I tried using export, but it did not really work. Could use your help, thanks.
Not possible. You cannot set environment variables in a parent process like this.
Unlike a DOS batch file, a Unix shell script cannot directly affect the environment of its calling shell.
You could consider using the . (dot) or source command to read and execute the script in the context of the calling shell. This means that changes made in the script do affect the environment (in general; you can still run into issues with sub-shells).
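A quick illustration of the difference (setvars.sh is a hypothetical file name):

# setvars.sh
i=2
k=2
p=$((k * i))

Then, in the calling shell:

. ./setvars.sh   # runs in the current shell, so the variables survive
echo "$p"        # prints 4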
The other alternative is to have the script that sets the variables write the values in name=value format into a file which the calling script then reads (with . or source again).
The conventional solution is to add the settings to your .profile or .bashrc - which one you should use depends on your specific needs and your local Bash configuration; my first recommendation would be .profile, but then you have to avoid any bashisms because this file is shared with sh (so, no let, for example).
For more specific needs, put the commands in a file, and source it when you need it. You might also want to create a simple script to update the file with your current values.
# source this file to update $HOME/stuff
cat <<HERE >"$HOME/stuff"
i='$i'
k='$k'
p='$p'
export i k p
HERE
The syntax here is quite simple, but assumes you don't have values which can contain single quotes or otherwise free-form content. How to safely store arbitrary values which you don't have complete control over is a much more complex discussion; I am providing a simple solution for the basic use case where you merely need to save a few simple scalar values, like numbers.
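Reading the values back in a later shell is then just a matter of sourcing the generated file:

. "$HOME/stuff"
echo "$p"   # the saved value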
To keep your variables when you connect to a remote system, look at the documentation for the tool you are using to connect. For example, ssh has configuration options for importing environment variables from the local system when starting a remote session.
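With ssh, for example, the local side would look something like this (a sketch; MY_VAR is a hypothetical variable name, and the remote sshd_config must also allow it via AcceptEnv):

# ~/.ssh/config on the local machine
Host build-server
    SendEnv MY_VAR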
