backgrounding a process with & (ampersand) does not work in bash

I'm trying to script tmux with a bash script.
Here's the line that I'm using to execute another script in the background (quotes are part of the line):
"$CURRENT_DIR/scripts/continuum_restore.sh" &
That works just fine on my machine. The problem is, it seems that the above line is *not backgrounded* for another user who uses the script. The script is executed synchronously in his case.
Interestingly, we tried backgrounding a random process from the login shell (sleep 10 &) and things worked okay in that case.
Here's a GitHub issue the user opened.
Here's all the info I have for the user's computer where backgrounding does not seem to work:
OSX 10.10.1
bash --version is GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin14)
the user's tmux version is 1.9a (if this matters)
Here's the whole script that contains the line that is not backgrounded, link. The problematic line is 56.
Ideally I'd like to first reproduce the issue consistently and then try to fix it. Here are the things I've tried to reproduce the issue (unsuccessfully):
set +m (setting this just before the line with ampersand)
stty susp undef (setting this just before the line with ampersand)
setting both of the above
The specific question I have is: what option can be used to disable backgrounding functionality in bash/shell? (so I can reproduce the above issue)

You can try redirecting the output; leaving it unredirected could be what prevents the job from running in the background.
nohup "$CURRENT_DIR/scripts/continuum_restore.sh" >/tmp/output.log &

Related

Bash script for setting up a Mac keeps skipping commands or simply prints lines to the console without executing them

I've been trying to create a bash script that allows me to transfer my existing dev setup to a brand new MacBook.
I set up a bash script which is supposed to automate this process, but for some reason when I call the script using curl, it doesn't seem to reliably run the whole thing and I can't figure out why that is. Here's an example of commands being printed to the console but not executed.
If I were to manually enter each line into the terminal and execute, things work as expected however doing so defeats the purpose of the script.
I'll attach some screenshots of the terminal output so you can see the exact issues I'm facing and at which point it behaves oddly.
I've had to run the script a few times to get it to execute the skipped steps, but it would be good to understand why certain steps are getting missed. Here's a link to my gist containing the script. Would appreciate any suggestions for improvements or explanations for the behaviour I'm seeing.
Things I have tried that haven't resolved my issue:
Splitting the script into two smaller scripts
Erasing my mac and running the script again (done this several times)
Adding sleep 5 between each command
Edit: this is how I'm running the script:
sudo curl -Lks https://gist.githubusercontent.com/curtis-j-campbell/b695513a44393c3a5084c011c6d0c890/raw | /bin/bash
Thanks in advance
It appears that everything after brew install git is being echoed. That suggests that something in that command is copying its stdin to stdout, so it's processing the rest of the script. Change that line to
brew install git </dev/null
so it won't read the script as its stdin.
Also, you don't need to run curl under sudo. If you need privileges to install the program, you should run bash as the superuser, not curl.
curl -Lks https://gist.githubusercontent.com/curtis-j-campbell/b695513a44393c3a5084c011c6d0c890/raw | sudo /bin/bash
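The same stdin redirection applies to any other command in a piped script that might read its standard input. A sketch of the resulting pattern (the second package is purely illustrative):
#!/bin/bash
# When this script arrives on bash's stdin via curl | bash, any command
# that reads stdin can swallow the rest of the script. Redirect stdin
# for such commands so they can't.
brew install git </dev/null
brew install node </dev/null   # hypothetical additional package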

-echoctl not being acknowledged in forked program

Please,
I have a terminal application that requires no echoing of control characters back to the terminal.
I can happily issue 'stty -echoctl' at a terminal, run my application and obtain the result I am after. Further, I can include 'stty -echoctl' in .bashrc and everything is fine. (I have also added it to .profile but that seems to bring in .bashrc anyway)
I can then open another terminal (by typing 'konsole', 'gnome-terminal' or 'xterm' in the original console) and again I get the result I expect.
The problem I have (and this is in preparation for forking the program from another application) is that if I try
$ xterm -e ./V2.13
or even
$ xterm -e bash -c ./V2.13
the control characters are in fact echoed back to my app.
EDIT: Additionally, is there any need for (or benefit in) executing bash to run my application?
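One thing worth trying (an untested sketch, not a confirmed fix): set the tty mode inside the shell that xterm spawns, since a bash started with -c is non-interactive and never reads your .bashrc, so the new pseudo-terminal keeps its default settings:
xterm -e bash -c 'stty -echoctl; exec ./V2.13'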

Fish shell startup command prints � characters on Hyper

I have the following problem with the fish shell on my mac. I recently figured how to modify the ~/.config/fish/config.fish to automatically run a command upon starting the terminal.
Now, I want to run a particular script that prints some ASCII art. It works just fine in the regular terminal app (so it shouldn't be a problem with fish or the script, I think), but it prints only the � character in the Hyper terminal (Hyper.is). The odd part is that if I just run the script in the shell manually, it works just fine.
My question is: does anyone know why it doesn't work when launched on startup by fish, but works fine when I launch it manually?
Solved the problem: I added a very small delay to the script before printing the ASCII art, and now everything works. I think executing it immediately upon launching Hyper interfered with a plugin I use (I suspect the hyper full plugin).
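For reference, the fix amounts to something like this at the top of the script (a sketch; the delay value and the art file name are illustrative):
#!/bin/bash
sleep 0.2              # give Hyper and its plugins a moment to finish initializing
cat ascii_art.txt      # then print the ASCII art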

In Bash, how can I tell if I am currently in a terminal

I want to create my own personal logfile that logs not only when I log in and out, but also when I lock/unlock my screen. Kind of like /var/log/wtmp on steroids.
To do this, I decided to run a script when I log into Ubuntu that runs in the background until I quit. My plan to do this is to add the script to .bashrc, using ./startlogging.sh & and in the script I will use trap to catch signals. That's great, except .bashrc gets run every time I open a new terminal, which is not what I want for the logger.
Is there a way to tell in Bash that the current login is a gnome login? Alternatively, is there some sort of .gnomerc I can use to run my script?
Edit: Here is my script:
Edit 2: Removed the script, since it's not related to the question. I will repost my other question, rather than repurpose this one.
Are you looking for a way to detect what type of terminal it is?
Try:
echo $TERM
From Wikipedia:
TERM (Unix-like) - specifies the type of computer terminal or terminal
emulator being used (e.g., vt100 or dumb).
See also: List of Terminal Emulators
For bash, use ~/.bash_logout.
That will get executed when you log out, which sounds like what you are trying to do.
Well, for just bash, what you want are .bash_login/.bash_logout in your home directory (rather than .bashrc). These are run whenever a LOGIN shell starts/finishes, which happens any time you log in to a shell (on a tty or console, or via ssh or other network login). These are NOT run for bash processes created to run in terminal windows that you open (as those are not login shells), so they won't get run every time you open a new terminal.
The problem is that if you log in with some mechanism that does not involve a terminal (such as gdm running on the console to start a gnome or kde or unity session), then there's no login shell so .bash_login/logout never get run. For that case, the easiest is probably to put something in your .xsessionrc, which will get run every time you start an X session (which happens for any of those GUI environments, regardless of which one you run). Unfortunately, there's no standard script that runs when an X session finishes.
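A minimal sketch of that approach (the script location is just an example):
# ~/.xsessionrc -- runs once per X session, regardless of which desktop environment you pick
"$HOME/startlogging.sh" &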

bash script on cygwin - seems to get stuck between consecutive commands.

I am using a bash script to run a number of applications (some repeatedly) on a Windows machine through Cygwin. The script contains commands to launch those applications, line by line. Most of these applications run for many minutes, and many times I have observed that the (i+1)-th application does not start even after the i-th application has completed. In such cases, if I press Enter in the Cygwin console where the bash script is running, the next application starts running. Is it because of an issue with bash on Cygwin? Or is it an issue with the Windows OS itself? Have any of you observed such an issue with bash + Cygwin + Windows?
Thanks.
I think I have seen this before.
Instead of
somecommand
try
somecommand </dev/null
If that doesn't work, try
cmd /c somecommand
Or experiment with other redirections, e.g.
somecommand >/dev/null
Sounds like you may have a problem with your shell script's line endings; DOS (and Windows) uses CR+LF line endings, whereas Linux uses LF only. Try saving the file with LF endings.
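If line endings turn out to be the problem, converting the script in place is straightforward (a sketch; the filename is illustrative, and dos2unix is available as a Cygwin package):
dos2unix myscript.sh
# or, without dos2unix, strip the carriage returns with sed:
sed -i 's/\r$//' myscript.sh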
What might also be going on:
When I was running Cygwin on a school laptop, I encountered a dramatic slowing of shell scripts vs. when they were running in a native Linux environment. This was especially apparent when running a configure script from GNU Autotools.
Check your PATH for slow drives (from the Cygwin FAQ):
Why is Cygwin suddenly so slow?
If suddenly every command takes a very long time, then something is probably attempting to access a network share. You may have the obsolete //c notation in your PATH or startup files. Using //c means to contact the network server c, which will slow things down tremendously if it does not exist.
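A quick way to spot such entries (just a sketch) is to print each PATH component on its own line and look for anything starting with //:
echo "$PATH" | tr ':' '\n' | grep '^//'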
You might also want to check whether you have an antivirus program running. Antivirus programs tend to scan every single executable file as it is executed; this can cause problems for even simple shell scripts that run hundreds or even thousands of individual programs before they run their course.
This mailing list post outlines what is needed to pseudo-mount the main /usr/bin directory as cygexec. I'm not sure what that does, but I found it helped.
If you're running a configure script, try the -C option.
Hope this helps!
Occasionally, I'll get this behaviour because I have accidentally deleted the shebang at the top of the script, that is, deleted the #!/bin/bash on the first line of the script.
It's even more likely for this to happen when a parent shell script calls a child script that has the shebang missing!
Hope this helps.
A bit of a long shot, but I have seen some similar behaviour previously.
In Windows 2000, if any program running in a command prompt window had some of its text highlighted by the cursor, it would pause the running command, and you had to press Enter or clear the highlighting to get the command prompt to continue executing.
As I said, bit of a long shot, but accidental mouse clicks could be your issue...
Install Cygwin with Unix-style line breaks and forget weird problems like that.
Try saving your script with the properly line-broken style for your Cygwin, that is, the style you specified during installation.
Here is some relevant information:
https://stackoverflow.com/a/7048200/657703
