Makefile: set the environment variables output by a command - shell

I am trying to write a make target that sets the environment variables produced by a shell command.
Running the gcloud beta emulators datastore env-init command in a terminal prints a few export statements like the following (it doesn't set the variables, it just echoes / prints them):
export DATASTORE_EMULATOR_HOST=localhost:8432
export DATASTORE_PROJECT_ID=my-project-id
Normally, I have to copy and paste these lines into the terminal to set the variables.
Is it possible to execute these printed export statements so the variables are set in the shell? I tried the following, but it didn't work:
target:
#export JAVA_HOME=$(JAVA_HOME); \
$(shell gcloud beta emulators datastore env-init); \
go run src/main.go
It prints out the script output if I do this:
target:
export JAVA_HOME=$(JAVA_HOME); \
gcloud beta emulators datastore env-init \
go run src/main.go
Similar to JAVA_HOME, how can I also source the output of the gcloud beta emulators datastore env-init command (which is 4 lines of export commands) so the variables are set in the environment?
So, in effect, I want something like:
target:
export JAVA_HOME=$(JAVA_HOME); \
export DATASTORE_EMULATOR_HOST=localhost:8432; \
export DATASTORE_PROJECT_ID=my-project-id; \
go run src/main.go
thanks.
bsr

The output printed by make does not by itself affect the environment of the subprocesses it starts.
If the variable assignments don't appear to have any effect, that's because make runs each recipe line in a separate subshell, each with a fresh environment. You need to join the commands together like this:
target:
export JAVA_HOME=$(JAVA_HOME); \
eval "$$(gcloud beta emulators datastore env-init)"; \
go run src/main.go
The eval is necessary so that the output of the gcloud command is evaluated in the current shell, and not a subshell. Also note the semicolon at the end of the second line of the recipe.
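If you are on GNU make 3.82 or newer, another option is the .ONESHELL special target, which runs the whole recipe in a single shell so the exports persist across lines without the trailing semicolons and backslashes. A minimal sketch, assuming the same commands as above (recipe lines still need a leading tab):
.ONESHELL:
target:
	export JAVA_HOME=$(JAVA_HOME)
	eval "$$(gcloud beta emulators datastore env-init)"
	go run src/main.go
Note that .ONESHELL affects every recipe in the makefile, not just this target.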

Related

Run bash then eval command on Docker container startup

I'm setting up a Docker container to be a simple environment for OCaml, since I don't want to manage two opam toolchains on two computers (a Windows desktop and a Linux laptop). My goal is to have the container drop into a bash prompt on docker-compose run with OCaml ready to go, and to do that I need to enter bash and then run eval $(opam env) on startup. This is my current Dockerfile:
FROM ocaml/opam:alpine-3.12
# Create folder and assign owner
USER root
RUN mkdir /code
WORKDIR /code
RUN chown opam:opam /code
USER opam
# Install ocaml
RUN opam init
RUN opam switch create 4.11.1
RUN opam install dune
# bash env
CMD [ "/bin/bash" ]
ENTRYPOINT [ "eval", "\$(opam env)" ]
Building and trying to run this gives me the error:
sh: $(opam env): unknown operand
ERROR: 2
I tried making a run.sh script, but that ran into some chmod/permission issues that are probably harder to debug than this. What do I do to open this container in bash and then run the eval $(opam env) command? I don't want to do this with command-line arguments; I'd like to do it all in a Dockerfile or docker-compose file.
The trick is to use opam exec [1] as the entry point, e.g.,
ENTRYPOINT ["opam", "exec", "--"]
Then you can either run a command from the installed switch directly, or just start an interactive shell with run -it --rm <cont> sh, and you will have the switch fully activated, e.g.,
$ docker run -it --rm binaryanalysisplatform/bap:latest sh
$ which ocaml
/home/opam/.opam/4.09/bin/ocaml
As an aside, since we're talking about Docker and OCaml, let me share some more tricks. First of all, you can look into our collection of dockerfiles in BAP for some inspiration. Another important trick that I would like to share is using multistage builds to shrink the size of the image; here's an example Dockerfile. In our case, it gives us a reduction from 7.5 GB to only 750 MB, while still preserving the ability to run and build OCaml programs.
And another side note :) You should also run your installation in a single RUN entry, otherwise your layers will eventually diverge and you will get weird missing-package errors. Basically, here's the Dockerfile that you're looking for:
FROM ocaml/opam2:alpine
WORKDIR /home/opam
RUN opam switch 4.11.1 \
&& eval "$(opam env)" \
&& opam remote set-url default https://opam.ocaml.org \
&& opam update \
&& opam install dune \
&& opam clean -acrs
ENTRYPOINT ["opam", "exec", "--"]
[1] Or opam config exec, i.e., ENTRYPOINT ["opam", "config", "exec", "--"], for older versions of opam.
There's no way to tell Docker to do something after the main container process has started, or to send input to the main container process.
What you can do is to write a wrapper script that does some initial setup and then runs whatever the main container process is. Since that eval command will just set environment variables, those will carry through to the main shell.
#!/bin/sh
# entrypoint.sh
# Set up the version-manager environment
eval "$(opam env)"
# Run the main container command
exec "$@"
In the Dockerfile, make this script be the ENTRYPOINT:
COPY entrypoint.sh /usr/local/bin
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["/bin/bash"]
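One caveat, since you mentioned chmod/permission issues earlier (this is an assumption about your local file permissions, not something visible from the Dockerfile): if entrypoint.sh isn't executable on the host when it's copied in, add a chmod step right after the COPY line, switching back to root first if a non-root USER is in effect at that point:
RUN chmod +x /usr/local/bin/entrypoint.sh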
It also might work to put this setup in a shell dotfile, and run bash -l as the main container command to force it to read dotfiles. However, the $HOME directory isn't usually well-defined in Docker, so you might need to set that variable. If you expand this setup to run a full application, the entrypoint-wrapper approach will work there too, but that sequence probably won't read shell dotfiles at all.
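For what it's worth, a hypothetical sketch of that dotfile variant (the home path and the choice of ~/.profile are assumptions, and bash -l will prefer ~/.bash_profile if one exists; untested):
ENV HOME=/home/opam
RUN echo 'eval "$(opam env)"' >> /home/opam/.profile
CMD ["/bin/bash", "-l"]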
What you show looks like an extremely straightforward installation sequence and I might not change it, but be aware that there are complexities around using version managers in Docker. In particular every Dockerfile RUN command has a new shell environment and the eval command won't "stick". I'd ordinarily suggest picking a specific version of the toolchain and directly installing it, maybe in /usr/local, without a version manager, but that approach will be much more complex than what you have currently. For more mainstream languages you can also usually use e.g. a node:16.13 prebuilt image.
What's with the error you're getting? For ENTRYPOINT and CMD (and also RUN) Docker has two forms. If something is a JSON array then Docker runs the command as a sequence of words, with one word in the array translating to one word in the command, and no additional interpretation or escaping. If it isn't a JSON array – even if it's mostly a JSON array, but has a typo – Docker will interpret it as a shell command and run it with sh -c. Docker applies this rule separately to the ENTRYPOINT and CMD, and then combines them together into a single command.
In particular in your ENTRYPOINT line, RFC 8259 §7 defines the valid character escapes in JSON, so \n is a newline and so on, but \$ is not one of those. That makes the embedded string invalid, and therefore the ENTRYPOINT line isn't valid, and Docker runs it via a shell. The single main container command is then
sh -c '[ "eval", "\$(opam env)" ]' '/bin/bash'
which runs the shell command [, as in if [ "$1" = yes ]; then ...; fi. That command doesn't understand the $(...) string as an argument, which is the error you're getting.
The JSON array has already escaped the things that need to be escaped, so it looks like you could get around this immediate error by removing the erroneous backslash:
ENTRYPOINT ["eval", "$(opam env)"] # won't actually work
Docker will run this as-is, combining it with the CMD, and you get
'eval' '$(opam env)' '/bin/bash'
But eval isn't a "real" command – there is no /bin/eval binary – and Docker will pass on the literal string $(opam env) without interpreting it at all. That's also not what you want.
In principle it's possible to do this without writing a script, but you lose a lot of flexibility. For example, consider
# no ENTRYPOINT; shell-form CMD
CMD eval $(opam env) && exec /bin/bash
Again, though, if you replace this CMD with anything else you won't have done the initial setup step.

Setting environment in makefile

I am trying to export an env variable in make to use it on the next lines. I am doing as suggested here: Setting environment variables in a makefile.
eks-apps:
export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig)
kubectl get all
But its not using that kubeconfig in the kubectl command. What am I missing?
Every line in a recipe is executed in a new shell. Because of this, you have to use a single shell for all your commands:
eks-apps:
( \
export KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig); \
kubectl get all \
)
From the answer you linked to:
Please note: this implies that setting shell variables and invoking shell commands such as cd that set a context local to each process will not affect the following lines in the recipe. If you want to use cd to affect the next statement, put both statements in a single recipe line.
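Another option (a sketch, untested against your directory layout, with the path expression copied from your recipe) is to skip export and set the variable only for the kubectl invocation by prefixing the command, which keeps everything on one recipe line (indented with a tab):
eks-apps:
	KUBECONFIG=$(CURDIR)/terraform/kubernetes/cluster/$(shell ls terraform/kubernetes/cluster/ | grep kubeconfig) kubectl get all
The VAR=value prefix applies only to that one command, so no export or subshell grouping is needed.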

Problems running a script.sh in the Ubuntu app on Windows 10

I'm trying to learn how to code. I should say that I'm using the Ubuntu app on Windows, so I don't know if my problems are related to this setup.
I set these variables in the terminal:
FOLDER="/mnt/c/Users/franc/Desktop/nuova"
species=mm10
fragmentsize=200
window=200
gap=200
output="/mnt/c/Users/franc/Desktop/nuova/sicer"
and then I wrote this loop
#!/bin/bash
for fq in $FOLDER/*.bam
do
bedtools bamtobed -i "$fq" > "${fq%.bam}.bed"
sicer -t ${fq%.bam}.bed \
-s $species \
-f $fragmentsize \
-w $window \
-g $gap \
-o $output
echo "DONE"
done
Basically I want the files in FOLDER to be transformed into "${fq%.bam}.bed" files, and then I want to run the sicer tool on these new files.
If I copy and paste these commands into the terminal, everything works, but if I save the loop as script.sh and try to run the script I get different errors.
Of course, I made the script executable with chmod +x, and I also stripped the Windows line endings with awk '{ sub("\r$", ""); print }' myscript.sh > myscript1.sh since I edited it in Windows (otherwise Ubuntu fails to open it).
But when I launch the script containing the loop, it says that it is not able to open the files in FOLDER (Failed to open BAM file /*.bam or BAD permission denied). I tried running it both as ./myscript1.sh and as sudo ./myscript1.sh.
What am I missing? Do I have to somehow link the variables I set in the terminal to the variables used in the saved script?
thanks
Francesca
You need to export the variables so that they'll be inherited by the shell process running the script.
export FOLDER="/mnt/c/Users/franc/Desktop/nuova"
export species=mm10
export fragmentsize=200
export window=200
export gap=200
export output="/mnt/c/Users/franc/Desktop/nuova/sicer"
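Alternatively (a sketch, assuming the same paths and the same sicer options as in your question), you can make the script self-contained by defining the variables at the top of the script itself, or run it in the current shell with . ./myscript1.sh so it sees the unexported variables you already set:
#!/bin/bash
# Self-contained version: the variables are defined inside the script
FOLDER="/mnt/c/Users/franc/Desktop/nuova"
species=mm10
fragmentsize=200
window=200
gap=200
output="/mnt/c/Users/franc/Desktop/nuova/sicer"

for fq in "$FOLDER"/*.bam
do
    # Convert each BAM to BED, then run sicer on the new BED file
    bedtools bamtobed -i "$fq" > "${fq%.bam}.bed"
    sicer -t "${fq%.bam}.bed" -s "$species" -f "$fragmentsize" \
          -w "$window" -g "$gap" -o "$output"
    echo "DONE"
done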

How to run Robot Framework's `robot` command from a shell/bash script on Windows?

As a user I want to execute Robot Framework's robot command with some command-line options. I put everything in a script to avoid retyping the long command each time - see the example below. On Linux and macOS I can execute this script from any terminal emulator, i.e.
# Linux
. run_local_tests.sh
# Mac OS
./run_local_tests.sh
On Windows, the application associated with the .sh file type (the VSCode editor) is opened instead of the robot command executing, or an error like robot: command not found is returned:
# Windows
.\run_local_tests.sh
# OR
run_local_tests.sh
# OR
bash run_local_tests.sh
shell script - filename: run_local_tests.sh
#!/bin/bash
# Set desired loglevel: NONE (less details), INFO, DEBUG, TRACE (most details)
export LOG_LEVEL=TRACE
# RUN CONTRIBUTION SERVICE TESTS
robot -i CONTRIBUTION -e circleci \
--outputdir results \
--log NONE \
--report NONE \
--output XML/CONTRIBUTION.xml \
--noncritical not-ready \
--flattenkeywords for \
--flattenkeywords foritem \
--flattenkeywords name:_resources.* \
--loglevel $LOG_LEVEL \
--name CONTRI \
robot/CONTRIBUTION_TESTS/
Renaming the script from .sh to .bat doesn't help :(
Entering bash, then activating the venv and calling the script doesn't work either.
What other options are there (without installing additional tools like Cygwin etc.)?
I'm actually trying to answer the same question in the opposite direction (how to trigger/run them on my machine as .sh). Looks like we may help each other out. 8)
I believe this is what you're looking for:
Your file would be run_local_tests.bat
Contents:
@echo off
cd C:\path\to\robot\project
call robot -d relative/path/to/test/output/dir relative/path/to/run_local_tests.bat
Of course, you can use any other valid robot CLI syntax in the call as well. You may have to make it executable too; I'm not sure.
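For completeness, here is a rough batch translation of the original run_local_tests.sh (a sketch only, untested; it assumes robot is on PATH, e.g. from an activated venv, and uses ^ for line continuation and %LOG_LEVEL% for variable expansion):
@echo off
rem Set desired loglevel: NONE (less details), INFO, DEBUG, TRACE (most details)
set LOG_LEVEL=TRACE
rem RUN CONTRIBUTION SERVICE TESTS
robot -i CONTRIBUTION -e circleci ^
 --outputdir results ^
 --log NONE ^
 --report NONE ^
 --output XML/CONTRIBUTION.xml ^
 --noncritical not-ready ^
 --flattenkeywords for ^
 --flattenkeywords foritem ^
 --flattenkeywords name:_resources.* ^
 --loglevel %LOG_LEVEL% ^
 --name CONTRI ^
 robot/CONTRIBUTION_TESTS/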

Running Bash scripts in iPython

I wrote a bash script, run.sh, which has a python command with multiple options:
python train.py --lr 0.01 \
--momentum 0.5 \
--num_hidden 3 \
--sizes 100,100,100 \
--activation sigmoid \
--loss sq \
--opt adam \
--batch_size 20 \
--anneal true
I tried running this command in iPython -
!./run.sh
However, in iPython I'm not able to access the variables of the python script train.py. Is there some way to run the bash script in iPython so that I can access those variables? I don't want to copy and paste the above command from the bash script each and every time.
I'm currently using iPython 5.1.0 on macOS Sierra.
The python process that runs your script train.py and the python process you're using at the ipython command line are two separate processes. It makes sense that one doesn't know about the variables of the other. There is probably some fancy way to connect the two but I suspect from the way you described the problem that it's not worth the work.
Here's an easier way to get access: replace python train.py in your script with python -i train.py. This way you will drop into interactive mode in the process that ran your script after it finishes, and anything defined at the top level will be accessible. You could also insert a call to pdb.set_trace() in your script to stop at an arbitrary point.
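For example, a sketch of run.sh with only that one change (the options are copied from your original script):
#!/bin/bash
# Same command as before, but -i drops into an interactive prompt when train.py finishes
python -i train.py --lr 0.01 \
    --momentum 0.5 \
    --num_hidden 3 \
    --sizes 100,100,100 \
    --activation sigmoid \
    --loss sq \
    --opt adam \
    --batch_size 20 \
    --anneal true
It's simplest to run this from a regular terminal rather than through ! in iPython, since the interactive >>> prompt needs to attach to your terminal; once train.py finishes you land in a session where all of its top-level names are defined.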
