bash heredoc hangs when more than 512 characters

My GNU bash scripts
(GNU bash, version 5.1.0(1)-release (x86_64-apple-darwin19.6.0))
hang on macOS when they contain a heredoc with more than 512 characters.
For example, the USAGE heredoc below works unless I add one more character to it:
cat <<'USAGE'
--all List all tasks, TASK_IDs will be ignored
--name NAME Only list tasks with specified NAME
--logs list log messages
--pending Only list tasks that have not been scheduled
--active same as --pending
--scheduled Only List tasks that have been scheduled, whether running or finished
--running Only List tasks that are currently executing / running
--finished Only List tasks that have been run, i.e., have finished
12345678901234567890
USAGE
Note: there are no variable expansions, quotes, etc., just literal text.
If I break the text into multiple heredocs they all work, but if I combine them in any way to create a heredoc with more than 512 characters, bash hangs.
What am I doing wrong?

Well, the problem has disappeared. It may be related to a recent bash upgrade to
GNU bash, version 5.1.4(1)-release (x86_64-apple-darwin19.6.0)
which did include changes to heredoc processing related to the size of the heredoc with respect to buffer sizes.

The same thing recently started happening to me. I recognize that this isn't a very satisfying answer, but I just switched back to bash 3.2.57(1)-release, which comes pre-installed on the Mac. (I spent several hours trying to figure out how to use Homebrew to roll back to an earlier version of bash, but, as of 2020, this no longer seems to be a supported feature.)
I ran chsh -s /bin/bash to change my default shell to the Mac-default bash.
I then rearranged my PATH so that it finds /bin/bash before /usr/local/bin/bash. (If a script has a #!/usr/bin/env bash shebang, it then finds the older version.)
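For what it's worth, a minimal sketch of that PATH change, assuming the Homebrew bash lives in /usr/local/bin and that ~/.bash_profile is the startup file in use (adjust to your setup):

# ~/.bash_profile (hypothetical location)
# Put /bin ahead of /usr/local/bin so that plain `bash` and `env bash`
# resolve to the Apple-supplied /bin/bash rather than the Homebrew build.
export PATH="/bin:/usr/bin:/usr/local/bin:$PATH"

After opening a new terminal, `which bash` should then report /bin/bash.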

I have encountered the same issue. With the help of a colleague we traced a quite odd behaviour in one of our tools to this exact issue: when the string piped in via <<< exceeds 512 characters, the command hangs.
We both have the same environment; it works for him but not for me.
macOS Catalina 10.15.7
Bash installed with brew
Bash version GNU bash, version 5.1.8(1)-release (x86_64-apple-darwin19.6.0)
Kernel Darwin Kernel Version 19.6.0
When using the Mac-included bash it works (GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin19)), but I cannot use that version.
This is driving us nuts.

Experienced this with bash version 5.1.16(1)-release (x86_64-apple-darwin21.1.0) and fixed via reboot.

This is not really an answer to the question but a suggestion to simplify the code:
echo "
--all List all tasks, TASK_IDs will be ignored
--name NAME Only list tasks with specified NAME
--logs list log messages
--pending Only list tasks that have not been scheduled
--active same as --pending
--scheduled Only List tasks that have been scheduled, whether running or finished
--running Only List tasks that are currently executing / running
--finished Only List tasks that have been run, i.e., have finished
12345678901234567890 "
It will do the same thing but in an easier way.
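If the text needs to be reused, a printf variant along the same lines could also work (a sketch, not from the original answer; the variable name usage and the shortened option list are just illustrative):

usage='--all        List all tasks, TASK_IDs will be ignored
--name NAME  Only list tasks with specified NAME
--logs       List log messages'
printf '%s\n' "$usage"

This keeps the help text in a single-quoted string, so no heredoc is involved.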

Related

write to file is empty when I write from within a docker image

The code below is a step that is part of a Makefile which is executed when a certain stage is run in CI/CD.
.PHONY: deploy_endpoint_configuration
deploy_endpoint_configuration:
	#echo Deploying endpoint configuration
	gcloud endpoints services deploy ocr/model_adapter/predict_contracts.yaml --project $(project) &> $(project_dir)/deploy_contract.txt
	cat $(project_dir)/deploy_contract.txt | grep -Eo '\d*-\d*-\w*' | tail -n1 > $(project_dir)/config_id.txt
What I'm experiencing is that the content of deploy_contract.txt etc. is always empty when this piece of code is executed in the pipeline (e.g. GitLab), and I don't understand why. Does this have to do with the fact that make executes a new shell for every command? I'm not entirely sure, and this is hard to debug. I can confirm the issue when I run it as gitlab-runner exec docker <the stage> (for local debugging). But when I run it locally on macOS (i.e. only execute make deploy_endpoint_configuration), so it is not wrapped in a container as before, it runs and functions as it should (read: the content of config_id and deploy_contract is not empty but contains stdout + stderr).
For reference:
The image used in the CI/CD stage is image: dsl.company.com:5000/python:3.7-buster
On top of that, the gcloud CLI is installed to make use of gcloud commands.
Does anyone have an idea why no content is written to my files? (It is definitely deploying, so there must be some output.)
My suspicion is that it's because your command script relies on bash features. GNU make runs /bin/sh as its shell by default. On some systems (like RedHat and RedHat-derived systems) /bin/sh is actually a link to bash, so makefile recipes that use bash-specific features will work. I believe macOS does the same, although I don't do Mac.
On other systems, like Debian and Debian-derived systems like Ubuntu, /bin/sh is a simple, fast POSIX-compliant shell like dash, which doesn't support fancy bash things and so makefile recipes that use bash-specific features will not work.
Probably your container image is one of the latter type, while MacOS (your local system) uses bash as the shell.
You are using &>, which is a bash-specific feature that is not supported by POSIX shells.
You should write this using POSIX syntax, which is >$(project_dir)/deploy_contract.txt 2>&1
Either that or you can try overriding the shell in your makefile:
SHELL := /bin/bash
to force make to run bash instead... this will work as long as the container image you're running does actually have bash in it.
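For illustration, a sketch of the recipe from the question with the &> redirection rewritten in POSIX syntax as suggested above (paths and variable names are taken verbatim from the question):

.PHONY: deploy_endpoint_configuration
deploy_endpoint_configuration:
	#echo Deploying endpoint configuration
	# POSIX form: redirect stdout to the file, then duplicate stderr onto stdout
	gcloud endpoints services deploy ocr/model_adapter/predict_contracts.yaml --project $(project) > $(project_dir)/deploy_contract.txt 2>&1
	cat $(project_dir)/deploy_contract.txt | grep -Eo '\d*-\d*-\w*' | tail -n1 > $(project_dir)/config_id.txt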

issue with macOS Monterey terminal (zsh)

I am using the latest version of macOS Monterey.
My terminal uses zsh.
The prompt looks like user@users-MacBook-Pro ~ %
I noticed most videos tend to show a $ dollar sign at the end instead of the % percent sign, which is what I have.
Is my terminal not configured correctly, or is the % sign at the end okay?
I'm confused, as I am learning how to use the terminal.
That is simply how the zsh prompt looks. What you are seeing in your videos is the prompt of a bash shell, which typically ends in # for root and $ for all other users.
This is not an issue per se, but it could be if the commands or scripts used in the videos rely on special features of the bash shell. You have to consult the documentation of what you are about to use in order to find out if there are incompatibilities.
Since switching to a temporary shell is simple, you could use bash when you watch a video that assumes it is your shell.
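For example (a small sketch; it assumes the stock /bin/bash on macOS is good enough for following along):

bash        # start a temporary bash session inside the current zsh session
# ... follow the video's commands here ...
exit        # return to zsh when done

This does not change your default login shell; chsh would do that.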

backgrounding a process with & (ampersand) does not work in bash

I'm trying to script tmux with a bash script.
Here's the line that I'm using to execute another script in the background (quotes are part of the line):
"$CURRENT_DIR/scripts/continuum_restore.sh" &
That works just fine on my machine. The problem is, it seems that the above line is *not backgrounded* for another user that uses the script. The script is executed synchronously in his case.
Interestingly, we tried backgrounding a random process from the login shell (sleep 10 &) and things worked okay in that case.
Here's a github issue the user opened.
Here's all the info I have for the user's computer where backgrounding does not seem to work:
OSX 10.10.1
bash --version is GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin14)
users' tmux version is 1.9a (if this matters)
Here's the whole script that contains the line that is not backgrounded, link. The problematic line is 56.
Ideally I'd like to first reproduce the issue consistently and then try to fix it. Here are the things I've tried to reproduce the issue (unsuccessfully):
set +m (setting this just before the line with ampersand)
stty susp undef (setting this just before the line with ampersand)
setting both of the above
The specific question I have is: what option can be used to disable backgrounding functionality in bash/shell? (so I can reproduce the above issue)
You can try redirecting the output. Leaving it unredirected could prevent the job from running in the background.
nohup "$CURRENT_DIR/scripts/continuum_restore.sh" >/tmp/output.log &

Why is #!/usr/bin/env bash superior to #!/bin/bash?

I've seen in a number of places, including recommendations on this site (What is the preferred Bash shebang?), to use #!/usr/bin/env bash in preference to #!/bin/bash. I've even seen one enterprising individual suggest using #!/bin/bash was wrong and bash functionality would be lost by doing so.
All that said, I use bash in a tightly controlled test environment where every drive in circulation is essentially a clone of a single master drive. I understand the portability argument, though it is not necessarily applicable in my case. Is there any other reason to prefer #!/usr/bin/env bash over the alternatives and, assuming portability was a concern, is there any reason using it could break functionality?
#!/usr/bin/env searches PATH for bash, and bash is not always in /bin, particularly on non-Linux systems. For example, on my OpenBSD system, it's in /usr/local/bin, since it was installed as an optional package.
If you are absolutely sure bash is in /bin and will always be, there's no harm in putting it directly in your shebang—but I'd recommend against it because scripts and programs all have lives beyond what we initially believe they will have.
The standard location of bash is /bin, and I suspect that's true on all systems. However, what if you don't like that version of bash? For example, I want to use bash 4.2, but the bash on my Mac is at 3.2.5.
I could try reinstalling bash in /bin but that may be a bad idea. If I update my OS, it will be overwritten.
However, I could install bash in /usr/local/bin/bash, and setup my PATH to:
PATH="/usr/local/bin:/bin:/usr/bin:$HOME/bin"
Now, if I specify bash, I don't get the old cruddy one at /bin/bash, but the newer, shinier one at /usr/local/bin. Nice!
Except my shell scripts have that #!/bin/bash shebang. Thus, when I run my shell scripts, I get that old and lousy version of bash that doesn't even have associative arrays.
Using /usr/bin/env bash will use the version of bash found in my PATH. If I setup my PATH, so that /usr/local/bin/bash is executed, that's the bash that my scripts will use.
It's rare to see this with bash, but it is a lot more common with Perl and Python:
Certain Unix/Linux releases which focus on stability are sometimes way behind with the release of these two scripting languages. Not long ago, RHEL's Perl was at 5.8.8 -- an eight year old version of Perl! If someone wanted to use more modern features, you had to install your own version.
Programs like Perlbrew and Pythonbrew allow you to install multiple versions of these languages. They depend upon scripts that manipulate your PATH to get the version you want. Hard coding the path means I can't run my script under brew.
It wasn't that long ago (okay, it was long ago) that Perl and Python were not standard packages included in most Unix systems. That meant you didn't know where these two programs were installed. Was it under /bin? /usr/bin? /opt/bin? Who knows? Using #! /usr/bin/env perl meant I didn't have to know.
And Now Why You Shouldn't Use #! /usr/bin/env bash
When the path is hardcoded in the shebang, I have to run with that interpreter. Thus, #!/bin/bash forces me to use the default installed version of bash. Since bash features are very stable (unlike, say, trying to run a Python 2.x script under Python 3.x), it's very unlikely that my particular bash script will not work, and since my bash script is probably used by this system and other systems, using a non-standard version of bash may have undesired effects. It is very likely I want to make sure that the stable standard version of bash is used with my shell script. Thus, I probably want to hard code the path in my shebang.
There are a lot of systems that don't have Bash in /bin, FreeBSD and OpenBSD just to name a few. If your script is meant to be portable to many different Unices, you may want to use #!/usr/bin/env bash instead of #!/bin/bash.
Note that this does not hold true for sh; for Bourne-compliant scripts I exclusively use #!/bin/sh, since I think pretty much every Unix in existence has sh in /bin.
For invoking bash it is a little bit of overkill, unless you have multiple bash binaries, like your own in ~/bin; but that also means your code depends on $PATH containing the right things.
It is handy for things like python though. There are wrapper scripts and environments which lead to alternative python binaries being used.
But nothing is lost by using the exact path to the binary as long as you are sure it is the binary you really want.
#!/usr/bin/env bash
is definitely better because it finds the bash executable via your PATH environment variable.
Go to your Linux shell and type
env
It will print all your environment variables.
In your shell, type
echo $BASH
It will print the path of the bash you are running, which you can use to build the correct shebang path for your script.
I would prefer wrapping the main program in a script like the one below to check all bash versions available on the system. It is better to have more control over the version it uses.
#!/usr/bin/env bash
# This script chooses an installed bash with the desired major
# version and uses it to execute the main program.
readonly DESIRED_VERSION="5"
declare all_bash_installed_on_this_system
declare bash="${BASH}"   # default to the bash running this wrapper
if [ "${BASH_VERSINFO[0]}" -ne "${DESIRED_VERSION}" ]
then
    found=0
    # every entry in /etc/shells whose last path component is "bash"
    all_bash_installed_on_this_system="$(
        awk -F'/' '$NF == "bash" {print}' /etc/shells
    )"
    for bash in $all_bash_installed_on_this_system
    do
        versinfo="$( "$bash" -c 'echo "${BASH_VERSINFO[0]}"' )"
        [ "${versinfo}" -eq "${DESIRED_VERSION}" ] && { found=1; break; }
    done
    if [ "${found}" -ne 1 ]
    then
        echo "bash ${DESIRED_VERSION} not available"
        exit 1
    fi
fi
"$bash" main_program "$@"
Normally a #!/path/to/command shebang causes the script to be handed to that command when it is executed. For example,
#!/usr/bin/bash
# file.sh
echo hi
Running ./file.sh will start a new process and the script will be executed as /usr/bin/bash ./file.sh
Now
#!/usr/bin/env bash
# file.sh
echo hi
will get executed as /usr/bin/env bash ./file.sh. Quoting from the man page, env is described as:
env - run a program in a modified environment
So env looks up the command bash in its PATH environment variable and executes it; the environment can also be modified by passing NAME=VALUE pairs to env.
You can test this with other scripts using different interpreters like python, etc.
#!/usr/bin/env python
# python commands
Your question is biased because it assumes that #!/usr/bin/env bash is superior to #!/bin/bash. This assumption is not true, and here's why:
env is useful in two cases:
when there are multiple versions of the interpreter that are incompatible.
For example python 2/3, perl 4/5, or php 5/7
when the location depends on the PATH, for instance with a python virtual environment.
But bash doesn't fall under any of these two cases because:
bash is quite stable, especially on modern systems like Linux and BSD which form the vast majority of bash installations.
there's typically only one version of bash installed under /bin.
This has been the case for the past 20+ years, only very old unices (that nobody uses any longer) had a different location.
Consequently going through the PATH variable via /usr/bin/env is not useful for bash.
Add to these three good reasons to use #!/bin/bash:
for system scripts (when not using sh) for which the PATH variable may not contain /bin.
For example cron defaults to a very strict PATH of /usr/bin:/bin which is fine, sure, but other context/environments may not include /bin for some peculiar reason.
when the user has screwed up their PATH, which is very common with beginners.
for security when for example you're calling a suid program that invokes a bash script. You don't want the interpreter to be found via the PATH variable which is entirely under the user's control!
Finally, one could argue that there is one legitimate use case of env to spawn bash: when one needs to pass extra environment variables to the interpreter using #!/usr/bin/env -S VAR=value bash.
But this is not a thing with bash because when you're in control of the shebang, you're also in control of the whole script, so just add VAR=value inside the script instead and avoid the aforementioned problems introduced by env with bash scripts.
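To make that last point concrete, a small sketch (MY_VAR is a hypothetical name used only for illustration):

With env (requires an env implementation that supports -S):
#!/usr/bin/env -S MY_VAR=value bash
echo "$MY_VAR"

Without env, setting the variable inside the script and keeping the fixed path:
#!/bin/bash
MY_VAR=value
echo "$MY_VAR"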

Unable to execute shell script in Cygwin as a KornShell script

I rarely touch shell scripts; we have another department who writes them, so I have an understanding of them but little hands-on experience. However, that all seems rather useless with my issue.
I am trying to execute some KornShell (ksh) scripts on a Windows-based machine using Cygwin; we use these to launch our Oracle WebLogic servers, but now they simply will not execute. I used to be able to execute these exact same scripts fine on my old machine.
I have narrowed this down to the 'magic number', or whatever it is, at the start of the script where it specifies the script interpreter path:
i.e.:
#!/bin/ksh
If I change it to execute with a plain shell it works, i.e.:
#!/bin/sh
I went through the packages installed for Cygwin; the shells I have installed are:
mksh (MirBSD Korn Shell)
bash (the Bourne Again Shell)
zsh (Z shell)
Should I expect to see a ksh.exe in my cygwin/bin directory? There is a system file 'ksh' which I assumed somehow associates it with one of the other shell exes, like mksh.exe.
I understand my explanation may well be naff. But that being said, any help would be very much appreciated.
Thanks.
I believe the MirBSD korn shell is called mksh. You can verify this and look for the correct path by typing
% which mksh
% which ksh
or if you have no which,
% type -p mksh
% type -p ksh
or if that fails too, check /etc/shells which should list all valid shells on a system:
% grep ksh /etc/shells
You need to put the full path after the #! line. It will probably be /bin/mksh, so your line needs to look like:
#!/bin/mksh
You've probably fixed it by now, but the answer was no, your Cygwin does not (yet) know about ksh.
I solved this problem by launching the cygwin setup in command-line mode with the -P ksh attribute (as described in http://www.ehow.com/how_8611406_install-ksh-cygwin.html).
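For example (a sketch; the setup executable's name depends on which installer you downloaded):

# run Cygwin setup unattended (-q) and ask it to install the ksh package (-P)
setup-x86_64.exe -q -P ksh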
You can run a ksh script using a .bat file:
C:\cygwin\bin\dos2unix kshfilename.ksh
C:\cygwin\bin\bash kshfilename.ksh
Running a shell script through Cygwin on Windows
Install KornShell (ksh) into Cygwin by the following process:
Download: ksh.2012-08-06.cygwin.i386.gz
Install ksh via Cygwin setup.
Execute Cygwin setup.exe
Choose: Install from Local Directory
Select the ksh.2012-08-06.cygwin.i386.gz as the Local Package Directory.
Complete Cygwin setup.
Restart Cygwin.
