Maybe some of you OpenShift/Docker pros can help me out with this one. Apologies in advance for my formatting; I'm on mobile and don't have access to the exact error output right now. I can supply more detailed input/stderr later if needed.
Some details about the environment:
- a functioning OC pod running a single PostgreSQL 9.6 container
- CentOS 7 host
- CentOS 7 local machine
- bash 4.2 shell (both in the container and on my local box)
My goal is to use a one-liner bash command to rsh into the postgresql container and run a command that prints that container's databases to my local terminal. Something like this:
[root@mybox ~]$ oc rsh pod-name /path/to/command/executable/psql -l
result:
rsh cannot find required library, error code 126
The issue I am hitting is that, when executing this one-liner, rsh does not see the target pod's environment variables. This means it cannot find the supporting libraries that the psql command needs. If I don't supply the full path as shown in my example, it cannot even find the psql command itself.
Annoyingly, running the following one-liner prints all of the pod's environment variables (including the ones I need for psql) to my local terminal, so they should be accessible somehow.
[root@mybox ~]$ oc rsh pod-name env
Since this is to be executed as part of an automated procedure, the simple, interactive rsh approach (which works, as described below) is not an option.
[root@mybox ~]$ oc rsh pod-name
sh-4.2$ psql -l
(pod happily prints the database info in the remote terminal)
I have tried executing the script which defines the psql environment variables and then chaining the desired command, but I get permission denied when trying to execute the env script.
[root@mybox ~]$ oc rsh pod-name /path/to/env/define/script && psql -l
permission denied, rsh error code 127
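(I suspect part of the problem is that the && in that one-liner is interpreted by my local shell, so psql -l would run locally either way. What I am effectively trying to express is something like the following sketch, reusing the placeholder paths from above, where both commands run inside the pod:)
[root@mybox ~]$ oc rsh pod-name bash -c 'source /path/to/env/define/script && psql -l'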
I want to create a sh script, but it stops when "docker exec -it cli bash" is executed and does not go to the next line. How can I run the other commands as root?
root@ee3abae377df:/opt/gopath/src/github.com/hyperledger/fabric/peer#
It stops here and I am not able to execute the next command.
docker exec -it cli bash
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CHANNEL_NAME=mychannel
docker exec -it does not create a new container; it starts an interactive shell inside your running container, attached to your current terminal. This blocks the rest of the commands from running until you exit (or kill) that shell. The rest of the commands you have will in fact be run, once you exit the container.
I assume this is not desired. You should look into creating an entrypoint in a custom Dockerfile for your docker container where you execute the remainder of the commands in your script.
If you haven't created a Dockerfile before, the getting started guide from Docker is a pretty good intro to everything docker-related.
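As a rough sketch of that idea only (setup-peer.sh and its install location are hypothetical; the image is the fabric-tools image your cli service appears to use):
FROM hyperledger/fabric-tools
# setup-peer.sh is a hypothetical script holding your export lines
# and the peer commands you want to run non-interactively
COPY setup-peer.sh /usr/local/bin/setup-peer.sh
RUN chmod +x /usr/local/bin/setup-peer.sh
ENTRYPOINT ["/usr/local/bin/setup-peer.sh"]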
Executing commands using CLI on any peer
As per the description you have given, cli is pointing to peer0.org1.example.com:7051. (Please check your docker-compose file; there should be one service with the container name "cli" and the image hyperledger/fabric-tools.)
When you are executing "docker exec -it cli bash", you are entering the container of peer0.org1. It provides an interactive shell to you.
Let us consider that we want to install chaincode using cli on Peer1 Org1. Create one test.sh file and write the following command inside it:
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_ADDRESS=peer1.org1.example.com:7051" -e "CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" cli peer chaincode install -n cc_name -v cc_version -p cc_path
Here we are passing the environment variables CORE_PEER_LOCALMSPID, CORE_PEER_ADDRESS, and CORE_PEER_MSPCONFIGPATH.
Then just execute the test.sh file and the chaincode will be installed on peer1, provided the container is already running and the environment variables are correct. (Please provide the correct chaincode path, name, and version.)
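For example (assuming test.sh was saved in the current directory):
chmod +x test.sh
./test.sh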
You should leverage the option given by @jeremysprofile. But if you still want to achieve it through a shell script, then you might want to write an expect script alongside the shell script, or merge both scripts into one. It should look something like this:
#!/usr/bin/expect -f
# Spawn the interactive session that expect will drive
spawn docker exec -it cli bash
expect "*root*" {
    send -- "<your command>\r"
    send -- "exit\r"
}
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be.
Here is the Linux man page.
I'm trying to execute a script on the master node of an AWS EMR cluster. The intention is to create a new conda env and link it to Jupyter. I'm following this doc from AWS. The problem is that, whatever the content of the script, I get the same error: bash: /home/hadoop/scripts/bootstrap.sh: No such file or directory while executing sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh. I've made sure the .sh file is in the correct location.
But if I copy the bootstrap.sh file into the container and then run the same docker exec command, it works fine. What am I missing here? I've tried a simple script with the following contents, but it throws the same error:
#!/bin/bash
echo "Hello"
The doc clearly says:
Kernels are installed within the Docker container. The easiest way to accomplish this is to create a bash script with installation commands, save it to the master node, and then use the sudo docker exec jupyterhub script_name command to run the script within the jupyterhub container.
The docker exec command runs a command within the container's namespaces. One of those namespaces is the filesystem. So unless the command is part of the image, written into the container directly, or you have mounted a host volume to map a host directory into the container, you won't be able to execute it. A host volume could look like:
docker run -v /host/scripts:/container/scripts --name your_container $your_image
docker exec -it your_container /container/scripts/test.sh
That host volume could be the same path on both the host and the container.
If it is a shell script, you could use I/O redirection, e.g.:
docker exec -i $container_id /bin/bash <local_script.sh
but be aware that you cannot do interactive stuff this way since the script content has replaced your terminal as stdin. This works because the shell inside the container is just processing commands from stdin.
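Applied to the question above, that redirection approach would look something like this (a sketch, assuming bootstrap.sh is a plain shell script on the master node):
sudo docker exec -i jupyterhub /bin/bash < /home/hadoop/scripts/bootstrap.sh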
Other than those scenarios, I don't know what to tell you, except that the documentation from AWS appears to be wrong.
I'm running the docker container locally to troubleshoot its state. I don't always want to execute the RUN/ENTRYPOINT, I often want to get into the running container, do some things, and then run the RUN/ENTRYPOINT.
It would be super convenient to have the RUN/ENTRYPOINT available after I docker run bash by just pressing the up key. So I thought it would be nice if I could modify the history with history -s ... in the Dockerfile. That way, as soon as I docker run bash, I can just press up and have the RUN/ENTRYPOINT available.
When I put this in the Dockerfile, I got this error:
/bin/sh: 1: history: not found
Is there a way to set the bash history in a Dockerfile?
You get the error because RUN commands run in /bin/sh, which has no history command available.
To make this work, you need to run an interactive bash shell during the build, so it will store your history entry.
RUN bash -ic 'history -s foobar'
That should leave behind a history file with foobar as its most recent (and probably only) entry.
You will see an error during build about ioctl... that is normal, because interactive bash expects to find a terminal, and there won't be one. But it should still work fine.
bash: cannot set terminal process group (1): Inappropriate ioctl for device
bash: no job control in this shell
Note that this will be stored for the user you run the command as. If your image switches to a non-root user with the USER statement, you should put this after the USER line so the history file is stored in the home directory of the user your image runs as.
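A minimal sketch of that ordering, with a hypothetical base image and user:
FROM ubuntu:22.04
RUN useradd -m appuser
USER appuser
# Runs after USER, so the entry is written to /home/appuser/.bash_history
RUN bash -ic 'history -s foobar'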
When I first ssh to the server and then run the command, it executes successfully:
root@chef:~# chef-solo -v
Chef: 11.10.0
But when I try to run it like this
ssh root@188.xxx.xxx.xxx -t -C "chef-solo -c /var/chef/solo.rb"
I receive an error:
bash: chef-solo: command not found
Why is this happening, and how can I solve this issue?
It is still a matter of $PATH and ssh, not chef-solo. Interactive and non-interactive sessions do not necessarily have the same value for the $PATH variable. The same ssh problem is described here on Stack Overflow. You may also check the GNU bash manual for deeper insight into (non-)interactive and (non-)login shells. In short, the solution would be one of the following:
Run chef-solo using an absolute path. Here's how your command might look:
ssh root@188.xxx.xxx.xxx -t -C "/usr/local/ruby/bin/chef-solo -c /var/chef/solo.rb"
Tune the .bash configuration files to load the same $PATH variable for both interactive and non-interactive shells.
Note: To find out the absolute path, log in to the machine via ssh and run which chef-solo. (I don't know how experienced you are with Linux; sorry if I'm underestimating your knowledge.)
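For the second option, a minimal sketch (reusing the /usr/local/ruby/bin path from the example above) is to export the PATH near the top of the remote ~/.bashrc, before any guard that returns early for non-interactive shells:
# ~/.bashrc on the server
# Keep this above any '[ -z "$PS1" ] && return' line so that
# non-interactive ssh commands also pick it up
export PATH="$PATH:/usr/local/ruby/bin"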
So I'm trying to do something that involves running sbt over SSH, and this is what I'm attempting:
ssh my_username@<server ip> "cd <project folder>; sbt 'run-main Foo' "
When I do that however, I get an error message: bash: sbt: command not found
Then I SSH into the server myself, cd to the project folder, run sbt 'run-main Foo', and everything works nicely. I have checked to make sure sbt is on the $PATH variable on the remote server via ssh my_username@<server ip> "echo $PATH" and it shows the correct value.
I feel like this is a simple fix, but cannot figure it out... help?
Thanks!
-kstruct
When you log in, bash is run as an interactive shell. When you run commands directly through ssh, bash is run as a non-interactive shell, and therefore different initialization files are sourced (see the bash manual pages for exactly which ones). There are a number of ways to fix this, e.g.:
Use the full path to sbt when calling it directly through ssh
Edit .bashrc and add the missing directories to the PATH environment variable
Note that your test ssh my_username@<server ip> "echo $PATH" actually prints PATH on your client, not your server, because of the double quotes. Use ssh my_username@<server ip> 'echo $PATH' or ssh my_username@<server ip> env to print PATH from the server's environment. When checking using env, you will see that PS1 is only set in interactive shells.
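To make the quoting difference concrete:
# Double quotes: $PATH is expanded by your local shell before ssh runs
ssh my_username@<server ip> "echo $PATH"
# Single quotes: $PATH is expanded by the remote shell
ssh my_username@<server ip> 'echo $PATH'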