Launch bash subshell and run commands in a script - bash

I want to write a shell script that does the following:
Activates the pipenv virtual environment
Runs mkdocs serve, which starts a local dev server for my mkdocs documentation
If I do the naïve thing and put this in my script:
cd <my-docs-directory>
pipenv shell
mkdocs serve
it fails because pipenv shell "launches a subshell in the virtual environment". I need to pass the mkdocs serve command into that virtual shell (and preferably land in that same shell after running the script).
Thanks in advance!
Answer
Philippe's answer works. Here's why.
pipenv run bash -c 'mkdocs serve ; exec bash --norc'
Pipenv allows you to run a command in the virtual environment without launching a shell:
$ pipenv run <insert command here>
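For example, to start the docs server from the question without an interactive subshell:
$ pipenv run mkdocs serve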
bash -c <insert command here> allows you to pass a command to bash to execute
$ bash -c "echo hello"
hello
exec replaces the current shell process with a command, so the parent goes away and the child takes over its PID. Here's a related question on AskUbuntu.
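A quick way to see this (a minimal sketch, not from the original answer) is to print the shell's PID before and after exec; both lines show the same number because exec replaces the process instead of forking a child:
$ bash -c 'echo "pid before exec: $$"; exec bash -c "echo pid after exec: \$\$"'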

You can use this command:
pipenv run bash -c 'mkdocs serve ; exec bash --norc'
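Putting it all together, the script from the question might look like this (a sketch; the directory placeholder is from the question):
#!/bin/bash
cd <my-docs-directory> || exit 1
# Serve the docs inside the venv; when the server stops, replace this
# process with an interactive bash that still has the venv active:
pipenv run bash -c 'mkdocs serve ; exec bash --norc'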

Related

How to run command after source env shell inside bash script

I am trying to run a command after setting up an environment. This command runs a python script which depends on the environment.
I have the following code:
#!/bin/bash
source ~/some/linux/env/shell
python test.py
However, the "python test.py" only runs after I exit the env shell.
I want to be able to run the "python test.py" inside this new shell env.
First, add a shebang line so the system knows which interpreter should run the script:
#!/usr/bin/env python
Then give the script execute permission:
chmod a+x test.py
Now you can run it from the command line.
./test.py
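This makes the script directly executable, but it doesn't address the subshell problem itself. If the environment provides an activation script rather than a shell launcher (a common layout; the path below is an assumption), sourcing it keeps everything in the current shell:
#!/bin/bash
# Hypothetical activate script; adjust the path to your environment.
source ~/some/linux/env/bin/activate
python test.py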

How to run a sh script with CLI container using the docker exec command? [duplicate]

This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 3 years ago.
I want to create an sh script, but it stops when "docker exec -it cli bash" is executed and does not go on to the next line. How can I run the other commands as root?
root@ee3abae377df:/opt/gopath/src/github.com/hyperledger/fabric/peer#
Stops here and i am not able do execute the next command
docker exec -it cli bash
export CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CHANNEL_NAME=mychannel
docker exec -it starts an interactive shell inside an already-running container. It attaches a new shell to your current terminal, which blocks the rest of the commands from running until you kill the container or exit the shell. The rest of the commands will in fact run, once you exit the container.
I assume this is not desired. You should look into creating an entrypoint in a custom Dockerfile for your docker container where you execute the remainder of the commands in your script.
If you haven't created a Dockerfile before, the getting started guide from Docker is a pretty good intro to everything docker-related.
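As an alternative sketch (not from either answer): dropping -it and handing the whole sequence to a non-interactive bash keeps the script moving:
docker exec cli bash -c '
  export CORE_PEER_ADDRESS=peer0.org1.example.com:7051
  export CORE_PEER_LOCALMSPID="Org1MSP"
  # ...remaining exports from the question...
  # your peer commands go here
'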
Executing commands using CLI on any peer
As per the description you have given, cli points to peer0.org1.example.com:7051. (Check your docker-compose file; there should be one service with the container name "cli" and the image hyperledger/fabric-tools.)
When you execute "docker exec -it cli bash", you enter the peer0.org1 container, which gives you an interactive shell.
Let us consider that we want to install chaincode using the cli on Peer1 of Org1. Create one test.sh file and write the following command inside it:
docker exec \
  -e "CORE_PEER_LOCALMSPID=Org1MSP" \
  -e "CORE_PEER_ADDRESS=peer1.org1.example.com:7051" \
  -e "CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" \
  cli peer chaincode install -n cc_name -v cc_version -p cc_path
Here we are passing the environment variables CORE_PEER_ADDRESS and CORE_PEER_MSPCONFIGPATH.
Then just execute the test.sh file; the chaincode will be installed on peer1, provided the container is already running and the environment variables are correct. (Provide the correct chaincode path, name, and version.)
You should leverage the option given by @jeremysprofile. But still, if you want to achieve it through a shell script, you might want to write an expect script alongside the shell script, or merge both scripts into one. It should look something like this:
#!/usr/bin/expect -f
# Start the interactive container shell, then drive it from the script:
spawn docker exec -it cli bash
expect "*root*" {
    send -- "<your command>\r"
    send -- "exit\r"
}
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be.
Here is the Linux man Page.

Entering text into a docker container via ssh from bash file

What I am trying to do is setup a local development database and to prevent everyone having to go through all the steps I thought it would be useful to create a script.
What I have below stops once it is inside the container's terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command via a script into a docker container's terminal?
In order to execute a command inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty, which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend building an image with a Dockerfile instead: once you figure out the commands to run, they become RUN instructions, and the container started with docker run exposes a local development database.
If you really want some commands to run within the shell after it is started, and to maintain the session, then depending on the base image you might be able to mount a bash profile that has the required commands, e.g. -v db_profile:/etc/profile.d, where db_profile is a folder with the shell scripts you want to run. To get them to run, you'd exec sh -l so that the login startup scripts run.
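A sketch of that profile-mount idea (the container and folder names are illustrative, not from the answer):
# db_profile/ holds *.sh files with your init commands
docker run -d --name devdb -v "$PWD/db_profile":/etc/profile.d alpine sleep infinity
# -l makes sh a login shell, so /etc/profile.d/*.sh run at startup
docker exec -it devdb sh -l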

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container, from the shell script given above (e.g. cd /path/to && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
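For example (a usage sketch, assuming the Dockerfile above builds as the hypothetical tag myimage):
docker build -t myimage .
docker run -it myimage   # both scripts run, then you land in the bash prompt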
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This should (untested) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice below.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (I got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the PATH. The scripts directory now contains multiple files instead of one, so I can call all scripts directly without having to open a subshell on every change.
BTW, you can define the history file outside of your container too. This way it's no longer necessary to commit on every bash change.
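A sketch of both mounts together (paths are illustrative; $my_docker_build is the image variable from above):
# touch .docker_bash_history first so Docker mounts a file, not a directory
docker run -it \
  -v "$PWD/scripts":/scripts \
  -v "$PWD/.docker_bash_history":/root/.bash_history \
  $my_docker_build /bin/bash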
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or for use in a script), then you can use:
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too; they let you run multiple commands:
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
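For illustration, mylocal.sh could be as simple as this (a hypothetical script); with bash -s, the script body arrives on stdin and arg1..arg3 become the positional parameters:
#!/bin/bash
# $1..$3 are arg1..arg3 passed after "bash -s"
echo "got: $1 $2 $3"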

How to force ssh to execute bash instead of the user default on the remote machine?

I want to execute a bash script with ssh, but when I try this it runs under ksh, which is the user's default shell.
I can't change that default.
So, how can I trick ssh to execute my script with bash instead of the default shell?
Make this the first line of your script:
#!/usr/bin/env bash
Edit: As per this, the utility of /usr/bin/env is dubious. So, you probably want:
#!/bin/bash
Replace /bin/bash with the actual path of the bash executable.
You can call your script explicitly with bash:
ssh <ssh-opts> bash <scriptname>
This way ksh is still executed at login, but inside it you start bash to execute your script.
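If the script only exists on your local machine, you can also pipe it to a remote bash without copying it first (user and host are placeholders):
ssh user@remotehost bash < ./myscript.sh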
