Unable to set the PATH variable for JDK - bash

I have installed sun-java on Arch Linux (KDE) by first building the package and then installing it. This is how the environment variables are set on my machine:
file: /etc/profile
# /etc/profile

# Set our umask
umask 022

# Set our default path
PATH="/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"
export PATH

# Load profiles from /etc/profile.d
if test -d /etc/profile.d/; then
    for profile in /etc/profile.d/*.sh; do
        test -r "$profile" && . "$profile"
    done
    unset profile
fi

# Source global bash config
if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then
    . /etc/bash.bashrc
fi

# Termcap is outdated, old, and crusty, kill it.
unset TERMCAP

# Man is much better than us at figuring this out
unset MANPATH
and the file /etc/profile.d/jdk.sh:
export J2SDKDIR=/opt/java
export PATH=$PATH:/opt/java/bin:/opt/java/db/bin
export JAVA_HOME=/opt/java
export DERBY_HOME=/opt/java/db
What I understand from this is that the JDK directories should be appended to the PATH environment variable, but they are not. The $JAVA_HOME variable, however, is set correctly. Any reason why I am facing this problem?

/etc/profile and /etc/profile.d are processed only for login shells, so unless you log in to the machine where Java is installed (e.g. via ssh), you won't get those variables.
To have them in local interactive shells as well (e.g. when you open an xterm on a workstation), put them in the file /etc/bash.bashrc.
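A quick way to check which kind of shell you are in (a bash-only sketch; zsh and other shells need different checks):
shopt -q login_shell && echo "login shell" || echo "non-login shell"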
Hope this helps.

Actually, it was a silly mistake on my part. I am using the zsh shell, so I needed to put
export PATH=$PATH:$JAVA_HOME/bin
in my .zshrc file instead of .bashrc.
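For reference, a minimal sketch of the relevant ~/.zshrc lines, using the paths from the question; zsh does not source /etc/profile.d/*.sh on its own (unless your distribution's /etc/zsh/zprofile emulates it):
# ~/.zshrc
export JAVA_HOME=/opt/java
export PATH=$PATH:$JAVA_HOME/bin
A quick check from a new shell: which java should now print /opt/java/bin/java.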

Related

Persist OpenStack aliases and macros

When using the OpenStack client in the openstack "shell" mode, I really miss aliases for common tasks. I see that you can add aliases or macros with "alias add ..." or "macro add ...". However, the aliases are not persisted between shell sessions.
Is there some file where I can configure those?
You can set persistent aliases in /etc/profile or ~/.bashrc, or in additional files under /etc/profile.d/.
/etc/profile: applies to all users
~/.bashrc: applies to the current user, since ~ is the user's $HOME directory
/etc/profile.d/: a directory whose *.sh files are sourced by /etc/profile, as this snippet from it shows:
if [ -d /etc/profile.d ]; then
    for i in /etc/profile.d/*.sh; do
        if [ -r $i ]; then
            . $i
        fi
    done
    unset i
fi
All of these take effect when you log in again after updating the files (or immediately, if you source them in the current shell).
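For example, a minimal sketch; the file name and the alias itself are illustrative, not from the question:
# /etc/profile.d/my-aliases.sh -- sourced by /etc/profile for login shells
alias oslist='openstack server list'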

Calling An Anaconda Environment from MATLAB: Conda Command Not Found

I created a Python script in its own Anaconda environment and want to call it from Matlab 2020a. However, when I try to activate the environment from Matlab, I get an error message:
system('conda activate *name_of_environment*')
/bin/bash: conda: command not found
I installed the newest version of anaconda3 (2020.02) on an Ubuntu 18.04 machine and, as recommended, didn't add conda's bin directory to PATH in .bashrc, but source conda.sh instead, as recommended here:
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/michael/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/home/michael/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/home/michael/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/home/michael/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
# export PATH="/home/michael/anaconda3/bin:$PATH"  # commented out by conda initialize
# Enable conda to be called from bash
source /home/michael/anaconda3/etc/profile.d/conda.sh
However, I can't find an explanation of how to run conda from Matlab otherwise. Am I missing something?
Thanks a bunch, and best,
Michael
Let me elaborate on my comment in an answer.
Binaries are found through the PATH environment variable, and the location of conda is not in that variable. Therefore you should either add it to your PATH (or un-comment the export PATH line in the script you posted).
Example:
$ export PATH="$PATH:/home/michael/anaconda3/bin/"
$ ./yourscript.sh
But it can also be that the PATH variable isn't passed through system(), which I guess executes the command in a new shell. In that case, you should execute it as:
system('/home/michael/anaconda3/bin/conda activate *name_of_environment*')
I know it is too late, but maybe the best way to run a Python script using a conda environment is to call the Python executable associated with that environment directly:
system('~/anaconda3/envs/<name_of_environment>/bin/python your_script.py')
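A related option, assuming the installed conda is new enough to ship the conda run subcommand (4.6+; the 2020.02 installer qualifies), is to let conda resolve the environment's interpreter for you, again via an absolute path so PATH doesn't matter:
system('/home/michael/anaconda3/bin/conda run -n <name_of_environment> python your_script.py')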

Add conda environment info to terminal prompt

(I'm using anaconda on a MacBook)
By default conda adds the environment info to the command prompt as follows:
$ source activate my_env
(my_env) $ source deactivate
$
This can be switched off and on using
conda config --set changeps1 (true|false)
Since my terminal prompt is already customised I'd like to add the env info in a different way, but don't know how to exactly.
Right now I'm using the two commands sac and dac in my .bash_profile file to activate and deactivate envs, and therefore made this amateurish attempt, adding env_var:
env_var=""
#activate env (default env = my_env)
sac() {
if [ -z $1 ];
then
ENV="my_env"
else
ENV="${1}"
fi
source activate ${ENV}
env_var="${ENV}"
}
#deactivate env
dac() {
source deactivate
env_var=""
}
env_info() {
if [[ ${env_var} == "" ]]
then
echo ""
else
echo "in ${env_var}"
fi
}
PS1="\u "
PS1+="$(env_info) \$";
Which is not working (my bash knowledge is only rudimentary, sorry...).
env_info always stays "" no matter whether I call sac or dac in the terminal.
Question1: Why is the code not working?
Question2: Or is there maybe another way to get the current env-info in a - for this purpose - useful format?
conda info --envs returns too much info...
The method suggested in the comment from darthbith works very well. The variable $CONDA_DEFAULT_ENV is exactly what I was looking for:
>>> source activate myEnv
>>> echo $CONDA_DEFAULT_ENV
myEnv
To add to the answer by A.Wenn, this is what I added to my custom prompt:
PS1=""
# Add conda environment to prompt
if [ ! -z "$CONDA_DEFAULT_ENV" ]
then
PS1+="($CONDA_DEFAULT_ENV) "
fi
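Note that both snippets above evaluate the environment name only once, when PS1 is assigned. This is also the answer to Question 1: in PS1+="$(env_info) \$" the double quotes make bash expand $(env_info) at assignment time, when env_var is still empty, not at each prompt. A minimal sketch that avoids this, assuming bash (the function name is arbitrary):
# Rebuild PS1 before every prompt so activate/deactivate is picked up
__build_prompt() {
    local env=""
    if [ -n "$CONDA_DEFAULT_ENV" ]; then
        env="($CONDA_DEFAULT_ENV) "
    fi
    PS1="${env}\u \$ "
}
PROMPT_COMMAND=__build_prompt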

Inject Environment variables into a Jenkins build process with a shell script

The starting situation
I have a Jenkins build project in which I do almost everything by calling my build script (./jenkins.sh). I'm building a Cordova project, which depends on specific versions of Node and Xcode. I'm running the builds on Macs with the latest macOS Sierra.
So far I'm setting the environment variables in the Jenkins build with the EnvInject plugin (https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin).
The Goal
I want to have the environment variables set by the build script instead of in the Jenkins build. This way the environment variables are also in version control and I don't have to touch the Jenkins build itself.
Basically I need to rebuild the logic of the EnvInject Plugin with bash.
What I've tried #1
Within my jenkins.sh build script I've set the environment variables with export
jenkins.sh:
#!/bin/bash -ve
nodeVersion=7.7.8
xcodeVersion=8.3.1
androidSDKVersion=21.1.2
export DEVELOPER_DIR=/Applications/Xcode_${xcodeVersion}.app/Contents/Developer
export ANDROID_HOME=/Applications/adt/sdk
export PATH=/usr/local/Cellar/node/${nodeVersion}/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/${androidSDKVersion}:$PATH
# print info
echo ""
echo "Building with environment Variables"
echo ""
echo " DEVELOPER_DIR: $DEVELOPER_DIR"
echo " ANDROID_HOME: $ANDROID_HOME"
echo " PATH: $PATH"
echo " node: $(node -v)"
echo ""
This yields:
Building with environment Variables
DEVELOPER_DIR: /Applications/Xcode_8.3.1.app/Contents/Developer
ANDROID_HOME: /Applications/adt/sdk
PATH: /usr/local/Cellar/node/7.7.8/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/21.1.2:/Users/mles/.fastlane/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin
node -v
node: v0.10.48
PATH, DEVELOPER_DIR, and ANDROID_HOME seem to be set correctly; however, the script is still using the system version of node, v0.10.48, instead of v7.7.8 as set in PATH.
What I've tried #2
I've sourced the variables:
jenkins.sh:
#!/bin/bash -ve
source config.sh
# print info
echo ""
echo "Building with environment Variables"
echo ""
echo " DEVELOPER_DIR: $DEVELOPER_DIR"
echo " ANDROID_HOME: $ANDROID_HOME"
echo " PATH: $PATH"
echo " node: $(node -v)"
echo ""
config.sh:
#!/bin/bash -ve
# environment variables
nodeVersion=7.7.8
xcodeVersion=8.3.1
androidSDKVersion=21.1.2
export DEVELOPER_DIR=/Applications/Xcode_${xcodeVersion}.app/Contents/Developer
export ANDROID_HOME=/Applications/adt/sdk
export PATH=/usr/local/Cellar/node/${nodeVersion}/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/local/bin:/usr/local/bin:/Applications/adt/sdk/tools:/usr/local/bin/:/Applications/adt/sdk/build-tools/${androidSDKVersion}:$PATH
The result was the same as in What I've tried #1: still using the system node v0.10.48 instead of node v7.7.8.
The question
How can I set the PATH, DEVELOPER_DIR, ANDROID_HOME environment variables properly to be used only within the build script?
@tripleee:
Above I'm determining node by calling node: $(node -v). In the build script I'm running gulp, which triggers Ionic / Apache Cordova. Do the brackets around node -v start a subshell which has its own environment variables?
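For reference: $( ) does run in a subshell, but a subshell inherits the parent's exported variables, so that alone doesn't explain the old node. A quick demonstration:
$ export DEMO=inherited
$ echo "$(echo $DEMO)"
inherited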
@Jacob:
We have used nvm before, but we want fewer dependencies. Using nvm requires installing it on all build machines. Our standard is to install node with brew, which is why I'm using /usr/local/Cellar/node/${nodeVersion} as the path to node.
@Christopher Stobie:
env:
jenkins@jenkins:~$ env
MANPATH=/Users/jenkins/.nvm/versions/node/v6.4.0/share/man:/usr/local/share/man:/usr/share/man:/Users/jenkins/.rvm/man:/Applications/Xcode_7.2.app/Contents/Developer/usr/share/man:/Applications/Xcode_7.2.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/share/man
rvm_bin_path=/Users/jenkins/.rvm/bin
NVM_CD_FLAGS=
TERM=xterm-256color
SHELL=/bin/bash
TMPDIR=/var/folders/t0/h77w7t2s1fx5mdnsp8b5s6y00000gn/T/
SSH_CLIENT=**.**.*.** ***** **
NVM_PATH=/Users/jenkins/.nvm/versions/node/v6.4.0/lib/node
SSH_TTY=/dev/ttys000
LC_ALL=en_US.UTF-8
NVM_DIR=/Users/jenkins/.nvm
rvm_stored_umask=0022
USER=jenkins
_system_type=Darwin
rvm_path=/Users/jenkins/.rvm
rvm_prefix=/Users/jenkins
MAIL=/var/mail/jenkins
PATH=/Users/jenkins/.nvm/versions/node/v6.4.0/bin:/Users/jenkins/.fastlane/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/.rvm/bin:/Users/jenkins/tools/oclint/bin:/Applications/adt/sdk/tools:/Applications/adt/sdk/platform-tools:/Applications/adt/sdk/build-tools/android-4.4:/Users/jenkins/.rvm/bin
NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist
rvm_loaded_flag=1
PWD=/Users/jenkins
LANG=en_US.UTF-8
_system_arch=x86_64
_system_version=10.12
rvm_version=1.26.10 (latest)
SHLVL=1
HOME=/Users/jenkins
LS_OPTIONS=--human --color=always
LOGNAME=jenkins
SSH_CONNECTION=**.**.*.** ***** **.**.*.** **
NVM_BIN=/Users/jenkins/.nvm/versions/node/v6.4.0/bin
NVM_IOJS_ORG_MIRROR=https://iojs.org/dist
rvm_user_install_flag=1
_system_name=OSX
_=/usr/bin/env
alias:
jenkins@jenkins:~$ alias
alias l='ls -lAh'
alias rvm-restart='rvm_reload_flag=1 source '\''/Users/jenkins/.rvm/scripts/rvm'\'''
This doesn't look like an environment variable issue. It looks like a permissions issue. The user executing the script is either:
not able to read the /usr/local/Cellar/node/7.7.8/bin directory, or
not able to read the node executable from that directory, or
not able to execute the node executable from that directory
To test, become that user on the machine and execute the node command via its full path:
/usr/local/Cellar/node/7.7.8/bin/node -v
or, if you need to, change the script to avoid PATH lookups (I'm suggesting this for diagnosis only, not as a solution):
echo " node: $(/usr/local/Cellar/node/7.7.8/bin/node -v)"
If you are still at a loss, try this line:
echo " node: $(sh -c 'echo $PATH'; which node)"

set docker-machine variables using a bash script

I have a script like so:
#!/usr/bin/env bash
eval $(docker-machine env default)
The goal is to automate the setting of variables like
export DOCKER_TLS_VERIFY
export DOCKER_HOST
export DOCKER_CERT_PATH
export DOCKER_MACHINE_NAME
But when I check afterwards, the variables are not set. This is not the case if I run each export command manually. What am I doing wrong?
export makes variables available only to the current shell session. If you want them to persist across sessions, you can add them to your bash profile:
docker-machine env default >> ~/.bash_profile
This way, the variables will be available in all future shell sessions. Make sure to restart the shell after executing the command.
If you want the environment set in your current shell you need to "source" the script rather than run it.
When you run a script, the variables are set in the child bash process's environment and no longer exist once that script/process dies.
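For reference, a minimal machine.sh matching the output below; the echo line is an assumption added for illustration:
#!/usr/bin/env bash
eval "$(docker-machine env default)"
echo "DOCKER_HOST is $DOCKER_HOST"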
$ ./machine.sh
DOCKER_HOST is tcp://192.168.99.100:2376
$ echo "[$DOCKER_HOST]"
[]
When you source a script, the variables are set in your current environment:
$ . machine.sh
DOCKER_HOST is tcp://192.168.99.100:2376
$ echo "[$DOCKER_HOST]"
[tcp://192.168.99.100:2376]
