I am trying to create a shell script that runs several other scripts under CentOS 7. Each script starts with #!/bin/bash, and each has been tested and runs as a standalone shell script with no problem.
Content of scripts_all.sh
#!/bin/bash
set -x
sudo yum install g++ make binutils cmake openssl-devel boost-devel zlib-devel
sh script_1.sh # executes all the contents of script_1.sh
sh script_2.sh # does not execute any of the commands in script_2.sh
sh script_3.sh
sh script_4.sh
Content of script_1.sh
#!/bin/bash
sudo yum install centos-release-scl
sudo yum install devtoolset-9-gcc*
scl enable devtoolset-9 bash
which gcc
gcc --version
Content of script_2.sh
#!/bin/bash
sudo yum install centos-release-scl-rh
sudo yum-config-manager --enable centos-release-scl-rh
sudo yum install devtoolset-9
scl enable devtoolset-9 bash
It appears that ./scripts_all.sh successfully executes set -x, sudo yum, and sh script_1.sh, but stops at sh script_2.sh. It is worth noting that I can run sh script_2.sh with no issue independently of scripts_all.sh. I don't know why the rest of the scripts won't run.
./scripts_all.sh prints the lines of sh script_1.sh and their execution, but it never prints the lines of sh script_2.sh.
Could someone kindly help?
Copying comments into the semblance of an answer.
Change the sh script_1.sh etc lines to bash -x script_1.sh (or sh -x script_1.sh since the scripts don't seem to use any Bash-specific syntax) and monitor what's going on. Do you see the version information from gcc --version in script_1.sh?
gcc --version is only printed when I comment out scl enable devtoolset-9 bash. I ran scl enable devtoolset-9 bash and it does not output anything to the screen.
That suggests the scl command is not completing. Maybe it is waiting for input from the terminal. Do you see the output from which gcc when you include the scl command? If not, then it is close to certain that scl is trying to read from the terminal. I dunno what it's reading — it isn't a command I'm familiar with.
It is not waiting for any input. After execution, it brings up the prompt again when I run it by itself.
If you're not seeing the which gcc and gcc --version output, then it is probable that the scl command is not completing, IMO. What does the bash at the end of the command options do? Does it run a bash process? If so, where is its input coming from? Running with the -x option (sh -x script_1.sh) would show you what is being executed, and whether scl is completing.
scl enable foo bar bash actually runs a bash instance with foo and bar Software Collections enabled. See https://linux.die.net/man/1/scl
OK; and what is that bash instance doing? Is it not waiting for input before it executes anything? It's a little surprising that there isn't a prompt, but not completely astonishing. Have you tried typing exit when scl hangs?
I just tried scl enable devtoolset-9 bash & echo "Enabling devtoolset-9" and it works and ultimately prints out the gcc --version.
Well, that & runs the scl command in background, leaving the echo to run, and then which gcc and gcc --version. Replace the & with a semicolon. Or replace the & with -c 'echo Hi' and a semicolon and see what happens.
Wonderful! Adding -c echo "Hi" made it work!
So that bash command specified at the end of scl enable devtoolset-9 bash was waiting for input from you, which is why it didn't terminate (and you don't see which gcc running) and so on. You've got the same issue at the end of script_2.sh — what about the other scripts? But you now know what's going on and can decide what to do about it. Using -c exit would simply terminate the shell (instead of echoing Hi), for example.
I'd need to study the scl command more, but do you actually need to use it in these scripts?
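If you do still need scl there, here is a minimal sketch of script_1.sh reworked along those lines, using scl's single-command form instead of spawning an interactive bash (the -y flags are an added assumption so yum doesn't also wait for confirmation); script_2.sh would need the same treatment on its final line:
#!/bin/bash
sudo yum install -y centos-release-scl
sudo yum install -y devtoolset-9-gcc*
# run the checks inside the SCL environment rather than launching
# an interactive bash that sits waiting for terminal input
scl enable devtoolset-9 'which gcc; gcc --version'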
Related
I have this basic shell script that I'm invoking via an alias:
#!/bin/sh
cd /Users/tillman/t-root/dev/apps/actual-server &&
env /usr/bin/arch -x86_64 /bin/zsh --login &&
yarn start
It changes directory and switches the arch, but it never executes yarn start.
If I break this up into two consecutive commands (executing the first and then the second within iterm via different aliases), it works:
alias intel:
env /usr/bin/arch -x86_64 /bin/zsh --login

alias abudget:
cd /Users/tillman/t-root/dev/apps/actual-server
yarn start
Output:
~ intel ✔
~ abudget ✔
yarn run v1.22.19
$ node app
Initializing Actual with user file dir: /Users/tillman/t-root/dev/apps/actual-server/user-files
Listening on 0.0.0.0:5006...
Why is it that the first option, with all commands in one script, does not work?
You need the yarn start to be run by the copy of zsh, not run after that copy of zsh exits (which is what your code does now).
Consider using the -c argument (or a heredoc on zsh's stdin) to pass the code you want zsh to run:
#!/bin/sh
# "|| exit" avoids the need to use && to connect to later commands
cd /Users/tillman/t-root/dev/apps/actual-server || exit
exec /usr/bin/arch -x86_64 /bin/zsh --login -c 'exec yarn start'
The execs are a performance enhancement, replacing the original shell with zsh, and then replacing the copy of zsh with a copy of yarn, instead of fork()ing subprocesses in which to run zsh and then yarn. (This also makes a signal sent to your script get delivered directly to yarn.)
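For the heredoc route mentioned above, a minimal sketch (same paths as the question; the commands are fed to zsh on its stdin):
#!/bin/sh
cd /Users/tillman/t-root/dev/apps/actual-server || exit
# the quoted EOF keeps the outer shell from expanding anything;
# zsh reads and runs the lines between the markers
/usr/bin/arch -x86_64 /bin/zsh --login <<'EOF'
yarn start
EOF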
I would like to automate a process running a .bat file but using the bash terminal in my WSL2. These are my commands in my .bat:
bash
cd ~/Dev/Folder
nvm use 14.15.0
npm start
But the .bat stops after running "bash". I also tried bash && cd ~/Dev/Folder && nvm use 14.15.0 && npm start, and also replacing "bash" with "wsl", but I got the same result.
Maybe I'm doing something wrong, so I would appreciate some help with this.
bash starts a new Bash shell and waits for it to exit, like for any other terminal command. The subsequent commands are not redirected to Bash - they'll be executed once the Bash process exits.
To run commands within Bash, use the -c flag:
bash -c "cd ~/Dev/Folder && ls && nvm use 14.15.0 && npm start"
Also see What does 'bash -c' do?.
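From the .bat side, that whole sequence would then collapse to a single line. A sketch, not tested under WSL2, assuming nvm is initialized in ~/.bashrc (the -i flag makes bash read that file so the nvm shell function exists):
wsl bash -ic "cd ~/Dev/Folder && nvm use 14.15.0 && npm start"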
What does $(shell command -v docker) mean? It's being used in a Makefile.
I saw this in a github repository that I'm trying to understand.
It looks like it's setting a variable with a command that tests whether docker is installed, and stops the build if it's not. The problem is that I don't have a command named command installed. I tried to install command on Ubuntu, but it can't be found, and searching the internet for how to install this command command is really difficult because of its name: "how to install command in linux/ubuntu" didn't bring up anything useful. I also searched for it being used in Makefiles, trying to get some clue, but nothing so far.
Running the build command seems to work, because it builds the image (and yes, I have docker installed), but I still get this message in the terminal: make: command: Command not found
Any idea?
make build output (truncated):
$ make build
make: command: Command not found
make: command: Command not found
docker build -t codelytv/typescript-ddd-skeleton:dev .
Sending build context to Docker daemon 1.023MB
.....
This is the Makefile:
.PHONY = default deps build test start clean start-database

IMAGE_NAME := codelytv/typescript-ddd-skeleton
SERVICE_NAME := app

# Test if the dependencies we need to run this Makefile are installed
DOCKER := $(shell command -v docker)
DOCKER_COMPOSE := $(shell command -v docker-compose)

deps:
ifndef DOCKER
	@echo "Docker is not available. Please install docker"
	@exit 1
endif
ifndef DOCKER_COMPOSE
	@echo "docker-compose is not available. Please install docker-compose"
	@exit 1
endif

default: build

# Build image
build:
	docker build -t $(IMAGE_NAME):dev .

# Run tests
test: build
	docker-compose run --rm $(SERVICE_NAME) bash -c 'npm run build && npm run test'

# Start the application
start: build
	docker-compose up $(SERVICE_NAME) && docker-compose down

# Clean containers
clean:
	docker-compose down --rmi local --volumes --remove-orphans

# Start mongodb container in background
start_database:
	docker-compose up -d mongo
What it means is that the person who wrote this makefile wasn't careful enough to write it in a portable way.
The command command is part of the shell (which is why you won't see it if you look for it in the GNU make manual). Not only that, it's part of the bash shell specifically: it is not a POSIX sh standard command. The bash man page says:
command [-pVv] command [arg ...]
Run command with args suppressing the normal shell function
lookup. Only builtin commands or commands found in the PATH are
executed.
Basically, running command docker ... means that any shell alias or function named docker is ignored, and only the actual docker command is run.
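For instance, in an interactive bash session (a hypothetical demo; the alias is made up):
$ alias docker='echo intercepted'
$ docker ps
intercepted ps
$ command docker ps    # the alias is bypassed; the real docker binary runs
In the alias-free, non-interactive shell that make starts, command -v docker simply prints the path of the docker binary, which is what the Makefile captures.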
However, GNU make always runs /bin/sh as its shell, both for recipes and for the $(shell ...) function.
So, if you're on a system (such as Red Hat or CentOS or Fedora GNU/Linux) where the /bin/sh is a link to the bash shell, then the above command will work.
However, if you're on a system (such as Debian or Ubuntu GNU/Linux) where the /bin/sh is a link to a simpler POSIX shell such as dash, then the above command will not work.
In reality, this is not needed because there won't be any shell aliases or functions defined in the shell that make invokes, regardless. However, if the author wants to use bash shell features in their makefiles and allow them to work, they also need to tell make to use bash as its shell, by adding this to their makefile:
SHELL := /bin/bash
(of course this assumes that the user has a /bin/bash on their system, but...)
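Applied to the Makefile from the question, the change is one line at the top (a sketch; everything else stays the same):
SHELL := /bin/bash

DOCKER := $(shell command -v docker)
DOCKER_COMPOSE := $(shell command -v docker-compose)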
I've hit a snag with a shell script intended to run every 30 minutes in cron on a Redhat 6 server. The shell script is basically just a command to run a python script.
The native version of python on the server is 2.6.6, but the python version required by this particular script is 2.7+. I am able to easily run this on the command line by using the scl command (this example includes the python -V command to show the version change):
$ python -V
Python 2.6.6
$ scl enable python27 bash
$ python -V
Python 2.7.3
At this point I can run the python 2.7.3 scripts on the command line no problem.
Here's the snag.
When you issue the scl enable python27 bash command, it starts a new bash shell session, which (again) is fine for interactive command-line work. But when doing this inside a shell script, as soon as it runs the bash command, the script stops inside the new session.
Here's the shell script that is failing:
#!/bin/bash
cd /var/www/python/scripts/
scl enable python27 bash
python runAllUpserts.py >/dev/null 2>&1
It simply stops as soon as it hits the scl line, because bash pops it out of the script and into a fresh bash shell, so it never sees the actual python command I need it to run.
Plus, if run every 30 minutes, this would spawn a new bash each time, which is yet another problem.
I am reluctant to update the native python version on the server to 2.7.3 right now for several reasons. The Redhat yum repos don't yet have python 2.7.3, and a manual install would be outside of the yum update system. From what I understand, yum itself runs on python 2.6.x.
Here's where I found the method for using scl
http://developerblog.redhat.com/2013/02/14/setting-up-django-and-python-2-7-on-red-hat-enterprise-6-the-easy-way/
Doing everything in one heredoc in the SCL environment is the best option, IMO:
scl enable python27 - << \EOF
cd /var/www/python/scripts/
python runAllUpserts.py >/dev/null 2>&1
EOF
Another way is to run just the second command (which is the only one that uses Python) in the scl environment directly:
cd /var/www/python/scripts/
scl enable python27 "python runAllUpserts.py >/dev/null 2>&1"
scl enable python27 bash essentially activates a python virtual environment.
You can do this from within a bash script by simply sourcing the SCL package's enable script, which is located at /opt/rh/python27/enable
Example:
#!/bin/bash
cd /var/www/python/scripts/
source /opt/rh/python27/enable
python runAllUpserts.py >/dev/null 2>&1
Isn't it easiest to just run your python script directly? test_python.py:
#!/usr/bin/env python
import sys
f = open('/tmp/pytest.log','w+')
f.write(sys.version)
f.write('\n')
f.close()
then in your crontab:
2 * * * * scl enable python27 $HOME/test_python.py
Make sure you make test_python.py executable.
Another alternative is to call a shell script that calls the python script. test_python.sh:
#!/bin/bash
python test_python.py
in your crontab:
2 * * * * scl enable python27 $HOME/test_python.sh
One liner
scl enable python27 'python runAllUpserts.py >/dev/null 2>&1'
I also use it with the devtoolsets on CentOS 6.x:
me#my_host:~/tmp# scl enable devtoolset-1.1 'gcc --version'
gcc (GCC) 4.7.2 20121015 (Red Hat 4.7.2-5)
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
scl is the dumbest "let us try and lock you in" nonsense I've seen in a while.
Here's how I made it so I could pass arguments to a series of scripts that all linked to a single skeleton file:
$ cat /usr/bin/skeleton
#!/bin/sh
tmp="$( mktemp )"
me="$( basename $0 )"
echo 'scl enable python27 - << \EOF' >> "${tmp}"
echo "python '/opt/rh/python27/root/usr/bin/${me}' $#" >> "${tmp}"
echo "EOF" >> "${tmp}"
sh "${tmp}"
rm "${tmp}"
So if there's a script you want to run that lives in, say, /opt/rh/python27/root/usr/bin/pepper you can do this:
# cd /usr/bin
# ln -s skeleton pepper
# pepper foo bar
and it should work as expected.
I've only seen this scl stuff once before and don't have ready access to a system with it installed. But I think it's just setting up PATH and some other environment variables in a way that's vaguely similar to how it's done under virtualenv.
Perhaps changing the script to have the bash subprocess call python would work:
#!/bin/bash
cd /var/www/python/scripts/
(scl enable python27 bash -c "python runAllUpserts.py") >/dev/null 2>&1
The instance of python found by that bash subprocess should be your 2.7.x copy ... and all the other environment settings done by scl should be inherited thereby.
I'm trying to start unicorn_rails in a ruby script, and after executing many commands in the script, when the script gets to the following line
%x[bash -ic "bash <(. ~/.bashrc); cd /home/www-data/rails_app; bundle exec unicorn_rails -p 8000 -E production -c /home/www-data/rails_app/config/unicorn.rb -D"]
the script stops, generating the following output
[1]+ Stopped ./setup_rails.rb
and returns to the Linux prompt. If I type "fg", the script finishes running, the line where the script had stopped gets executed and unicorn gets started as a daemon.
If I run the line in a separate script, the script completes without stopping.
UPDATE_1 -
I source .bashrc because earlier in the script I install rvm and to get it to run with the correct environment I have the following:
%x[echo "[[ -s \"$HOME/.rvm/scripts/rvm\" ]] && source \"$HOME/.rvm/scripts/rvm\"" >> .bashrc]
%x[bash -ic "bash <(. ~/.bashrc); rvm install ruby-1.9.2-p290; rvm 1.9.2-p290 --default;"]
So if I want to run the correct version of rvm, ruby, and bundle, I need to source .bashrc.
end UPDATE_1
Does anyone have any idea what could cause a ruby script to halt as if control-Z was pressed?
Not sure why it's stopping, but my general rule of thumb is to never source my .bashrc in a script -- that might be the source of your problem right there, but I can't be sure without seeing what's in it. You should be able to change your script to something like:
$ vi setup_rails.sh
#!/usr/bin/bash
# EDIT from comments below
# expanding from a one liner to a better script...
RVM_PATH=$HOME/.rvm/scripts
# install 1.9.2-p290 unless it's already installed
$RVM_PATH/rvm info 1.9.2-p290 >/dev/null 2>&1 || $RVM_PATH/rvm install 1.9.2-p290
# run the startup command inside an rvm shell
$RVM_PATH/rvm-shell 1.9.2-p290 -c "cd /home/www-data/rails_app && bundle exec unicorn_rails -p 8000 -E production -c /home/www-data/rails_app/config/unicorn.rb -D"
This should give you the same result.