When I source my .cshrc file and then run the Tcl script, it works fine:
$ source .cshrc-sample
$ tclsh invoke.tcl
Following is the .cshrc file:
setenv AUTOTEST "/auto/isbutest/frt"
setenv ATS_EASY "$AUTOTEST"
setenv ATS_USER_PATH "$AUTOTEST"
setenv PATH "${AUTOTEST}/bin:${PATH}"
But when I set the environment variables in Tcl itself and run the script, I get the following error:
$ tclsh invoke.tcl
can't find package ha
while executing
"package require ha"
(file "invoke.tcl" line 8)
My Tcl script - invoke.tcl:
global env
set env(AUTOTEST) "/auto/isbutest/frt"
set env(ATS_EASY) "/auto/isbutest/frt"
set env(ATS_USER_PATH) "/auto/isbutest/frt"
set env(PATH) "$env(PATH):/auto/isbutest/frt/bin:";
package require ha
How can I run the script without sourcing the .cshrc?
The thing is, setting an environment variable from a script is not persistent; the variable only lives for the duration of the script's run. When I print the PATH variable it shows what is needed, but I don't know why it still isn't working. Is there any other workaround for this?
There are a few possibilities. The key things to look at are whether there are any other environment variables you've missed, whether the Tcl auto_path global variable is correct immediately before the package require, and whether anything else is going on.
The easiest way from the Tcl side is to add:
puts "auto_path=$auto_path"
parray env
immediately before the package require that has the error. That should print out plenty of information. (Pay particular attention to whether you are setting the TCL_LIBRARY or TCLLIBPATH environment variables differently.)
Aside from that, it's possible that something is set in the ~/.tclshrc file, which is only sourced in interactive mode (it happens before you get your prompt); that could cause observable differences. Another possibility is that the ha package's pkgIndex.tcl script is written using abbreviated command names, which only work when Tcl is in interactive mode. Errors in the package index script prevent the code that describes how to actually load/source the package's implementation from being registered, and that can produce exactly the error state you see. If the index script relies on abbreviations, fix it; that is always a bug. Abbreviations are a convenience for interactive use and should never appear in saved code.
You might want to check whether the list of packages is complete. Use this code for that:
catch {package require NoSuchPackage}; # Force immediate population of the list of packages
puts Packages:\n\t[join [lsort -dictionary [package names]] \n\t]
Again, put this in after any setting of global variables and before the problem package require.
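If the environment itself does turn out to be the problem, one way to run the script without sourcing the .cshrc at all is to pass the variables on the command line with env. This is only a sketch mirroring the .cshrc values shown above, assuming a POSIX-style shell:
$ env AUTOTEST=/auto/isbutest/frt ATS_EASY=/auto/isbutest/frt \
      ATS_USER_PATH=/auto/isbutest/frt PATH="/auto/isbutest/frt/bin:$PATH" \
      tclsh invoke.tcl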
Inside a Tcl script, you set an environment variable through the env array, e.g. set env(AUTOTEST) "/auto/isbutest/frt".
If you want to set an ordinary Tcl variable, use set VARNAME "/auto/isbutest/frt".
If you want to read an environment variable, use $::env(AUTOTEST).
And any variable declared using the set command can be accessed as $VARNAME.
I tried so many keywords:
using source
using nothing in front of the path
using a set -a / set +a block:
# set -a # automatically export all variables
'./../private/some-cred.sh'
# set +a
None of them lets me import my private creds into the .env file that is read by fastlane via Dotenv.overload './config/.env.qa'.
Current files:
./config/.env.qa
# source './../private/fastlane-cred.sh'
# APPLE_API_KEY_ID=$APPLE_API_KEY_ID_PRIVATE
./private/some-cred.sh
export APPLE_API_KEY_ID_PRIVATE="AAA"
When this setup is loaded into Bitrise (which is in ./ios/) via Dotenv.overload './config/.env.qa',
the value comes through as empty.
Any idea what else I should do so that I can load some variables from a file into the .env properly and have them recognized?
NOTE: the path is correct, because a file reference defined within the .env file was loaded into the Bitrise ENV vars correctly; it is only the variables that are not.
What is important to know about the "dotenv" paradigm is that the various "dotenv" implementations mimic a shell environment rather than actually being one. Because they run from inside the application, which is written in a non-shell language, it is far too late at that point to use the shell to set up environment variables. The "dotenv" libraries aren't even actually changing the process's environment; they just use the runtime's API to store the data in such a way that other code using the runtime's environment-access API sees it as if it came from the environment.
The fastlane system uses the dotenv gem, which uses an internal parser (based on regular expressions) to read a file that looks like a shell file (containing only variable assignments) but isn't one. Anything in it that doesn't look like a naive shell variable assignment is ignored; since the dotenv file isn't actually a shell script and isn't treated like one, you can't put shell scripting commands in it and expect them to work.
If you really want to use shell scripting to set up your environment, then use a wrapper shell script (or a Makefile) to start your environment instead of relying on the application's internal "dotenv" support, which, as explained above, isn't actually a shell script.
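For example, a minimal wrapper sketch based on the question's layout (the script name, paths, and the fastlane invocation are placeholders, not anything fastlane itself provides):
#!/bin/sh
# run-qa.sh (hypothetical): export the private creds, then start fastlane
set -a                       # export every variable defined while sourcing
. ./private/some-cred.sh     # defines APPLE_API_KEY_ID_PRIVATE
set +a
exec bundle exec fastlane qa # fastlane (and its Dotenv loading) now inherits the exported variables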
In vim I can access my bash environment variables such as $PWD and $PATH. I would like to know how to access my temporary shell variables in vim too.
For example, suppose I am in my terminal and define a variable foo="bar". Then I enter vim and try to access this variable with the following command: :!echo $foo, but it does not recognize the variable. From my understanding, vim starts a new shell each time a bash command is invoked and then closes it immediately afterwards. Is there a way to use the same shell in vim that my local variable foo was defined in?
No, you can't interact with the parent shell from a subprocess it spawned (without that shell's active participation, which isn't reasonably/practically available in the scenario at hand) -- but you can export your variables to make them accessible to new shells started in child processes.
Running
set -a
...will make any variable defined going forward be automatically exported to the environment, even without an explicit export command.
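For example (a quick sketch; the variable has to be assigned after set -a takes effect):
$ set -a
$ foo=bar
$ vim
:!echo $foo
bar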
Since (unlike the C system() function) vim's system() honors the SHELL environment variable, if SHELL=/bin/bash (or :set shell=/bin/bash has been run in vim), you can also invoke exported functions from vim. That is, if you define the function and export it as follows:
foo() { echo "bar"; }
export -f foo
...then you can invoke it with !foo from inside vim.
Even then, however, this is running in a new, transient shell instance, not the original parent process.
Explanation
Environment variables and shell variables are two entirely different concepts, but as we manipulate them in a similar way in bash, it's easy to get confused.
Whenever a process is created (by fork), it may include an environment, given by its parent at fork-time. The child process may then access and modify its content. How this is done as a user depends on the program:
In vim, you can access an environment variable like this: :echo $foo
In bash, you can access it like this: $ echo "$foo"
In most programming languages, you can access it with a syntax consistent with the rest of the language, such as ENV['foo'] in Ruby
On the other hand, a program may allocate memory for any internal use; notably, it will quite often define and use variables. Once again, this depends on the program:
In vim, you would use the :let command to assign an internal variable
In bash, you would assign a variable with $ foo='bar', and then read it with $ echo "$foo"
In most programming languages, you have a variation of the foo='bar' syntax, sometimes with type declarations, etc.
As you can see, bash uses the same syntax to read an environment variable and one of its own private variables, which can lead to some confusion.
When you execute vim from your bash shell, the environment is copied over from the parent process (bash) to the child (vim), but the private memory of bash (including any variables you may have defined) is not.
Thus, accessing them from the child process would require some inter-process communication mechanism between parent and child. While technically doable, this option is not implemented in either bash or vim.
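You can see the difference from a terminal with a small sketch, using a child bash as a stand-in for vim:
$ foo=bar
$ bash -c 'echo "${foo:-unset}"'   # child process: foo is not in its environment
unset
$ export foo
$ bash -c 'echo "${foo:-unset}"'   # now foo is copied into the child's environment
bar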
Solution
In order for your variable to be accessible from vim (or any forked process, for that matter), you need it to be present in the environment of your vim process.
Several options to do that:
$ export foo='bar' : This will mark your variable for export to the environment of subsequently executed commands. That's what you want in most cases.
$ foo='bar' vim : This adds your variable to the environment of this vim command. Very useful for troubleshooting, or for one-liners.
$ set -a : As you can see in the bash manpage, this marks every subsequent definition for export to the environment of subsequently executed commands. It's essentially equivalent to prepending export to every subsequent definition.
To go further
The question uses the :!echo $foo syntax to display the value of foo, which is yet another use case. The ! here is vim's shell escape, which allows you to execute a shell command from vim.
However, vim cannot execute anything in the parent shell (the one you executed the vim command in), so it creates a new bash shell in a child process, executes echo in it, and displays the result.
In the current case, the result is mostly the same, but it could easily be misleading in other situations, so it's important to understand what is happening here.
There is another vim syntax, using expand, that allows one to look up variables: :echo expand("$foo")
It however works entirely differently.
If no internal variable named foo exists, vim will invoke a shell to look it up (similarly to what ! would do).
This option is much slower than an environment lookup, and not recommended for most use cases.
If you want to use a value from your shell in the :substitute command, there's actually a way to do it.
I don't know if it solves your need but here we go.
Let's say we want to substitute Mydir with your PWD:
:s/Mydir/\=expand($PWD)/g
I've set some environment variables in /etc/profile. I can access them from bash, but for some reason I can't get them from Go.
/etc/profile:
...
TEST_ENV=test_me
I can access it from bash:
echo $TEST_ENV
test_me
I can't access this variable from Go:
os.Getenv("TEST_ENV") // returns ""
If I list the available environment variables with
os.Environ()
I don't see the variable I'm looking for, but there are a few variables that might help:
SHELL=/bin/sh
USER=root
LOGNAME=root
I guess my problem is related to different sessions and shells, so I even tried running
exec.Command("source /etc/profile")
and getting the variables afterwards, but it still returns nothing.
Can you give me some tips how to get environment variables if they're set in /etc/profile? I'd prefer getting them from that file, but if necessary, I can put the variables in a different place as well.
When you assign a variable in bash, it isn't exported to the environment by default. Only exported variables are passed along to processes created by the shell (i.e., programs that you run). Try export TEST_ENV=test_me.
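A minimal sketch of the fix (the program name is just a placeholder; note that /etc/profile is only read by login shells, so either log in again or source it in the shell you run the program from):
# in /etc/profile
export TEST_ENV=test_me

$ . /etc/profile
$ ./yourprogram   # os.Getenv("TEST_ENV") should now return "test_me"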
I am new to bash and trying to solve some issues for a piece of code I'm writing.
I am at the terminal under my user name and start bash:
USER$
USER$ bash
bash$
Now, inside that bash shell, I set some variables, e.g.:
i=2
k=2
let p=$k*$i
Now I want to use those variables after exiting that bash shell:
bash$exit
USER$
but now the variables are not there.
I tried using export, but it did not really work. Could you help? Thanks.
Not possible. You cannot set environment variables in a parent process like this.
Unlike a DOS batch file, a Unix shell script cannot directly affect the environment of its calling shell.
You could consider using the . (dot) or source command to read and execute the script in the context of the calling shell. This means that changes made in the script do affect the environment (in general; you can still run into issues with sub-shells).
The other alternative is to have the script that sets the variables write the values in name=value format into a file which the calling script then reads (with . or source again).
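A minimal sketch of that second approach (file and variable names are just examples):
# vars.sh - written by the script that computes the values
i=2
k=2
p=4
export i k p

# in the calling script
. ./vars.sh
echo "$p"   # prints 4, because the assignments ran in the calling shell itself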
The conventional solution is to add the settings to your .profile or .bashrc -- which one you should use depends on your specific needs and your local Bash configuration; my first recommendation would be .profile, but then you have to avoid any bashisms because this file is shared with sh (so no let, for example).
For more specific needs, put the commands in a file, and source it when you need it. You might also want to create a simple script to update the file with your current values.
# source this file to update $HOME/stuff
cat<<HERE>$HOME/stuff
i='$i'
k='$k'
p='$p'
export i k p
HERE
The syntax here is quite simple, but assumes you don't have values which can contain single quotes or otherwise free-form content. How to safely store arbitrary values which you don't have complete control over is a much more complex discussion; I am providing a simple solution for the basic use case where you merely need to save a few simple scalar values, like numbers.
To keep your variables when you connect to a remote system, look at the documentation for the tool you are using to connect. For example, ssh has configuration options for importing environment variables from the local system when starting a remote session.
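As a sketch of the ssh route (these are OpenSSH options; the server's sshd_config must also list the variable under AcceptEnv, or it will be silently dropped):
$ export FOO=bar
$ ssh -o SendEnv=FOO user@host
# on the remote host, FOO is now set to "bar", provided sshd_config contains: AcceptEnv FOO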
I have a bash script as follows:
rvm use 1.8.7
rvm list
The first line is a function loaded from my .bashrc file that defines some environment variables. By the time the second line executes, those variables are back to their previous values (the values that were set have been lost). What am I missing here?
Running on an Ubuntu box.
A subshell is being created and the variables are set within it. When the subshell exits, the changes are lost. This often happens when a while loop is in a pipe. Without seeing the function it's impossible to be more specific than that.
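The classic pipe-into-while example of this (just an illustration, not the asker's actual function):
$ count=0
$ printf 'a\nb\n' | while read line; do count=$((count+1)); done
$ echo "$count"
0
The loop body ran in a subshell created for the pipeline, so the parent shell's count was never updated.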
When you define variables that you want to be available in all subshells, you need to prefix the assignment with export, like so:
export myvar="some value"
I would check whether rvm is doing this properly.