Why does git freeze on git-credential-osxkeychain sometimes? - macos

When I do cd some-repo; git push origin master in my bash terminal, it doesn't ask me for username/password because I guess git has already saved that (it was so long ago that I don't remember the details of how that went down). I'm pushing to a GitHub repo as the remote origin.
So I have a C++ program that forks and then calls
execl("/bin/bash", "/bin/bash", "-c", "cd some-repo; git push origin master", (char *)0);
and then waits for the child bash process to finish.
Sometimes it works just fine, but other times (seemingly randomly) it will freeze up. Looking at the running process hierarchy, I see:
MyProgram
  git
    git-remote-http
      git
        git-credential-osxkeychain
If I kill the child-most git-credential-osxkeychain process, my program resumes (because the parent-most git command finishes), with unsurprising output such as:
error: git-credential-osxkeychain died of signal 15
error: RPC failed; result=7, HTTP code = 0
fatal: The remote end hung up unexpectedly
fatal: The remote end hung up unexpectedly
My question: why does git push origin master always seem to work (without asking me for a username, password, or anything else on stdin) in a bash terminal, but sometimes hang on git-credential-osxkeychain (probably waiting for something on stdin) when I run it from my C++ program, and other times not?
I tried looking for a man page on git-credential-osxkeychain and couldn't really find anything. Running it only prints Usage: git credential-osxkeychain <get|store|erase>, which isn't self-explanatory enough for me. Thank you!
I'm running OS X 10.8.3; git version 1.7.12.4 (Apple Git-37); GNU bash, version 3.2.48(1)-release (x86_64-apple-darwin12).

Without much information, my guess is that the hang depends on whether or not your login keychain is locked at the time. On the Mac, if the login keychain is unlocked, the query for your username and password can proceed unhindered. But if the keychain is locked, Mac OS X wants to prompt you for your login password to unlock the keychain. I suspect the dialog box is there, just hidden behind something, so you may have missed it. It will wait for you to type in your password, effectively hanging the process.
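If that is the cause, one workaround is to make sure the login keychain is unlocked before your program runs the push. This is only a sketch, assuming the default keychain path on OS X 10.8 (adjust if yours differs):

# unlock the login keychain; with no -p option, security prompts for the
# password in the terminal rather than via a GUI dialog
security unlock-keychain ~/Library/Keychains/login.keychain

# optionally clear the auto-lock timeout and lock-on-sleep settings
# (running set-keychain-settings with no flags means "no timeout")
security set-keychain-settings ~/Library/Keychains/login.keychain

Whether this helps depends on why the keychain was locked in the first place (auto-lock timeout, lock on sleep, and so on).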
There is more information on the git credential infrastructure in the gitcredentials documentation, and more about the API (including the command line for a helper) in the git-credential documentation.
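For reference, the helper is driven over stdin using git's credential protocol (key=value lines terminated by a blank line), which is why the bare usage message looks so terse. A hand-run get request, assuming a GitHub remote over HTTPS, looks like this:

printf 'protocol=https\nhost=github.com\n\n' | git credential-osxkeychain get
# prints username=... and password=... if the keychain holds an entry and is
# unlocked; the store and erase actions read the same key=value input

Running that by hand is also a quick way to reproduce any keychain prompt without going through a full git push.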

Related

Is there a way to get my laptop to beep from within a bash script running on a remote server via SSH?

I have a bash script that I have to regularly run on a remote server. Part of the script includes running a backup which takes a while, and after it has run, I have to hit "Y" to confirm that the backup worked before the script will continue.
I would like to know if there is a way to get my laptop to make a beep (or some sort of sound) when that happens. I know that echo -e '\a' makes a beep, but if I run it from within a script on the remote server, the beep happens on the remote server.
I have control of the script that is being run, so I could easily change it to do something special.
You could send the command through ssh back to your computer, like this:
ssh user@host "echo -e '\a'"
Just make sure you have ssh key authentication set up from your server back to your computer so the command can run smoothly.
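A minimal sketch of how that could sit in the backup script (the user name, the laptop hostname, and the run_backup step are placeholders, and the laptop must be reachable over SSH from the server):

run_backup                                       # placeholder for the long-running backup step
ssh you@your-laptop.example.com "printf '\a'"    # the bell character ends up rendered by your laptop's terminal
read -p "Did the backup work? (y/n) " answer     # the existing confirmation prompt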
In my case the suggested solutions with echo didn't work. I'm using a MacBook and connecting to an Ubuntu system. I keep the terminal open and I'd like to be notified when a long-running bash script has finished.
What I did notice is that if I shut down the remote system, the MacBook beeps and an alarm icon appears on the relevant terminal tab. So I have implemented a bit of a dirty workaround:
sudo shutdown 1440 && shutdown -c
This schedules a shutdown and then immediately cancels the request, and I do get the alarm beep + icon. You will need to set up sudo to allow the user to run shutdown. As it was my own remote server this was no problem, but it could limit the usability for others.
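The sudo part can be scoped with a dedicated sudoers rule rather than general passwordless sudo. A sketch (the user name and the path to shutdown are assumptions; check with which shutdown and edit with visudo):

# e.g. in a file created via: sudo visudo -f /etc/sudoers.d/shutdown-beep
youruser ALL=(root) NOPASSWD: /sbin/shutdown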

rsync hangs after transfer over ssh

I wrote a bash script that backs up files from a webserver (HostGator) to a local file server running FreeBSD.
I use rsync over ssh (from the file server) to connect to the remote server (I already have pre-shared RSA keys set up). When I run the following line to start the sync, the files all seem to come in just fine, but the command never returns and the script just hangs forever:
/usr/local/bin/rsync -az --chown=root:admin --chmod=ugo=rwX --exclude ".inode_lock" --rsh='ssh -p2222' admin@domain.com:/home/admin/ '/mnt/blah/blah/LocalBackup/' >> "./Logs/Backup Log.txt"
After waiting a few minutes, when I hit Ctrl+C to stop the command, it spits out the following error messages:
^CKilled by signal 2.
rsync error: unexplained error (code 255) at rsync.c(636) [generator=3.1.2]
rsync error: received SIGUSR1 (code 19) at main.c(1429) [receiver=3.1.2]
This still happens even if both sides are already synced and it is just checking for changes.
I'm not sure what to do to troubleshoot the problem. I did try removing the -v switch for rsync, as some users reported that it caused hangs, but I saw no difference.
EDIT
One more additional note. I ran the script again today to continue to troubleshoot. If I leave the script running without disturbing it after it hangs, eventually I receive the following message:
rsync: connection unexpectedly closed (2984632408 bytes received so far) [receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [receiver=3.1.2]
rsync: connection unexpectedly closed (8689703 bytes received so far) [generator]
rsync error: unexplained error (code 255) at io.c(226) [generator=3.1.2]
and then returns to the command prompt. I think this might be due to a timeout on the remote server's end, but I'm still not sure why the hang is happening in the first place.
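As a side note, rsync also has its own I/O timeout option, which at least turns an indefinite hang into a reported error. A sketch using the same host and paths as above (the other options are omitted for brevity):

# abort with an error if no data moves for 5 minutes, instead of hanging forever
/usr/local/bin/rsync -az --timeout=300 --rsh='ssh -p2222' admin@domain.com:/home/admin/ '/mnt/blah/blah/LocalBackup/'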
UPDATE
I did an additional test and limited the rsync transfer to a specific test folder with some sample files and subfolders, rather than grabbing the entire home directory. When I did this, it was able to successfully complete the transfer and exit appropriately. So it appears there must be some file or folder somewhere in the home directory of the server that is causing the problem. Are there any specific cases where rsync wouldn't be able to transfer a file? I have seen it throw "Permission denied" errors while trying to sync files that are write-locked, but even those files didn't stop it from continuing on. Any thoughts?
As an additional note, the remote server I'm connecting to is on a shared hosting account, so I don't have root access. I don't know whether that could be causing some problems.
UPDATE 2
So I studied the rsync command and added a couple more command-line parameters, --progress and --stats (along with --verbose), so I could better understand where it is dying. What I noticed is that the point where it was hanging was a very large file being downloaded from the server. But now, with --progress being reported (I am having it output directly to the terminal for the moment rather than to a file), it seems to be moving along just fine, with no hangups so far.
I am now beginning to suspect that the ssh connection is timing out due to inactivity, especially since in the original situation nothing gets output for a long time while the large file transfer is happening. Is this a possible scenario? If so, what could I do to hold the connection open? (I'm not sure it's a good idea to print the --progress updates directly to the log file.)
OK, I figured it out. Apparently HostGator's shared servers have an SSH timeout limit of 30-45 minutes set by default. Since running rsync took longer than that limit, it was closing the connection. I called and spoke to their tech support and they got it increased for my server.
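If raising the server-side limit had not been an option, SSH keep-alives are the usual way to hold the connection open, at least when the cutoff is based on idle time rather than total session length. A sketch based on the command above:

# have ssh send an application-level keep-alive every 60 seconds
/usr/local/bin/rsync -az --rsh='ssh -p2222 -o ServerAliveInterval=60 -o ServerAliveCountMax=3' admin@domain.com:/home/admin/ '/mnt/blah/blah/LocalBackup/'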

Heroku login success, but then freezes

I am using Git Bash on Windows.
I just verified my Heroku account,
then I open Git Bash and type:
$ heroku login
bash responds:
heroku: Press any key to open up the browser to login or q to exit:
Opening browser to https://cli-auth.heroku.com/auth/browser/c31ddaf9-7a55-4daf-afad-a0500e924c26
heroku: Waiting for login...
Logging in... done
Logged in as [my mail address]
and then I can type whatever I want, but it does not accept and execute the commands; it behaves like a text editor. When I click the cross to close the window, a warning message appears telling me that some processes are still running.
How do I get out of this frozen state and go on using Bash?
I solved it by hitting Ctrl+C to break out and then choosing yes (typing y).
It turns out that the Git Bash terminal frequently appears "frozen" after you enter a command.
Whenever that happens, press Ctrl+C.

system("git push 2>&1") works fine, but %x(git push 2>&1) hangs. Why?

I am using Ruby. I am trying to figure out why bundler's rake release hangs on the git push step, as also discussed inconclusively here.
I've narrowed it down to this line of code hanging:
`git push 2>&1`
I can reproduce the problem by running the same line of code in IRB.
What's mysterious is that the underlying git push does in fact execute, but for some reason Ruby never receives the return status. It just waits for the child process indefinitely.
Inspecting the process listing shows that the child process has Z+ (zombie?) status:
UID PID PPID C STIME TTY TIME CMD USER PGID SESS JOBC STAT TT
501 23397 3757 0 1:44PM ttys001 0:00.54 irb mbrictson 23397 0 1 S+ s001
501 26035 23397 0 2:06PM ttys001 0:00.00 (sh) mbrictson 23397 0 1 Z+ s001
Obviously, git push runs just fine in my shell. It's just when it is invoked via Ruby using backticks that it hangs.
Also, this works fine:
system("git push 2>&1") # => true
And this (i.e. without the output redirection) works fine too!
`git push` # => "Everything up-to-date"
Part of the problem is apparently ControlMaster auto in my ~/.ssh/config. When executing git push, this causes a new control connection process to be spawned in the background. Perhaps %x(git push 2>&1) is waiting for this background process to exit? If I disable ControlMaster in my SSH config, this does in fact solve the problem.
Still, this bothers me. I would rather not have to disable ControlMaster simply to make Ruby's backticks operator happy.
Can anyone explain:
Why does %x() hang but system() does not?
Why does removing 2>&1 make a difference?
This is on Mac OS X Yosemite with Ruby 2.2.0.
Figured it out:
Why does %x() hang but system() does not?
%x() waits to fully read the output of the child process; system() does not care about the output.
According to this bug report, the ControlPersist setting in OpenSSH causes stderr to be left open for the lifetime of the master connection. In my SSH config, I have ControlPersist 5m, and sure enough, %x() hangs for exactly 5 minutes before finally completing.
This doesn't affect system() because system does not wait on the output.
Why does removing 2>&1 make a difference?
As explained above, the SSH master connection leaves stderr open. It apparently closes stdout. Since stdout is closed, %x(git push) finishes immediately because there is nothing to wait for on stdout. When 2>&1 is added to the command, this causes stderr to be redirected to stdout. Since stderr is being left open by the master connection, this in turn causes stdout to remain open. %x waits on stdout and hangs.
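The same wait-for-EOF behavior is easy to reproduce without git or ssh. Bash command substitution also reads until every writer has closed the pipe, so a backgrounded child that inherits stdout produces exactly the same kind of hang (a small demo, not related to the original code):

out=$(sleep 30 & echo hi)                    # blocks ~30s: the backgrounded sleep keeps the pipe's write end open
out=$(sleep 30 >/dev/null 2>&1 & echo hi)    # returns immediately: sleep no longer holds the pipe

In the git case, the lingering writer is the ControlPersist master process holding stderr, and 2>&1 is what routes that still-open descriptor into the pipe %x() is reading.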
Unfortunately this behavior of OpenSSH shows no sign of being changed, so there is no satisfying solution other than disabling ControlPersist.
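If you want to keep connection sharing in general, a narrower workaround is to turn it off just for the host you push to. A sketch of the relevant ~/.ssh/config blocks (the Host pattern is an assumption; the 5m value comes from the setup described above):

# ~/.ssh/config — the first matching block wins for each option
Host github.com
    ControlMaster no
    ControlPersist no

Host *
    ControlMaster auto
    ControlPersist 5m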

Bash: script output to terminal session stops, but script finishes normally

I'm opening an SSH session to a remote server and executing a large (around 1000 lines) bash script on the remote machine. It involves several very CPU-intensive calls, which run for up to three minutes each. To track the script's progress it echoes messages placed at several points in the script.
In general the script runs smoothly. From time to time the script runs through (the resulting file on the remote machine is correct), but the output to the terminal stops. Ctrl-C doesn't help: no prompt, just a frozen session. top in a separate session shows the script executing normally.
My question: how do I keep the session alive?
local machine:
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.9
BuildVersion: 13A603
remote machine:
$ lsb_release -d
Description: Ubuntu 12.04.3 LTS
Personally, I would recommend using screen or tmux on the remote terminal for exactly this reason.
Those apps will allow the remote process to continue even if your local SSH session times out.
http://www.bangmoney.org/presentations/screen.html
http://tmux.sourceforge.net/
Start a screen session on the remote machine and run your command inside it:
screen -S largeScript
And then
./yourLargeScript.sh
Whenever your ssh session freezes, you can kill it with the escape sequence ~. (tilde, then a period, typed at the start of a line).
If you ssh in again, you can reattach to your screen session with:
screen -dr largeScript
Make it log to a file instead (perhaps via syslog), and tail that file from wherever is convenient for you. This also helps detach the script so you can run it headless, from a cron job, etc. Also, if the log file has read access for others, they too can monitor it.
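A minimal sketch of that pattern, reusing the script name from the answer above (the log path is just an example):

# on the remote machine: run detached, capturing stdout and stderr
nohup ./yourLargeScript.sh > ~/yourLargeScript.log 2>&1 &

# from any session, local or remote, follow the progress messages live
tail -f ~/yourLargeScript.log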
