Is there a way to exit from a gdb connection without stopping / exiting the running program? I need the running program to continue after the gdb connection is closed.
(gdb) help detach
Detach a process or file previously attached.
If a process, it is no longer traced, and it continues its execution. If
you were debugging a file, the file is closed and gdb no longer accesses it.
List of detach subcommands:
detach checkpoint -- Detach from a checkpoint (experimental)
detach inferiors -- Detach from inferior ID (or list of IDS)
Type "help detach" followed by detach subcommand name for full documentation.
Type "apropos word" to search for commands related to "word".
Command name abbreviations are allowed if unambiguous.
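For example, assuming gdb was attached to a running process by PID (12345 is just a placeholder), detaching and then quitting leaves the program running:
$ gdb -p 12345
(gdb) detach      # the process is no longer traced and resumes execution
(gdb) quit        # gdb exits; the program keeps running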
Since the accepted (only other) answer does not specifically address how to shut down gdb without stopping the program under test, I'm throwing my hat into the ring.
Option 1
Kill the server from the terminal in which it's running by pressing Ctrl+C.
Option 2
Kill the gdb server and/or client from another terminal session.
$ ps -u username | grep gdb
667511 pts/6 00:00:00 gdbserver
667587 pts/7 00:00:00 gdbclient
$ kill 667587
$ kill 667511
These options are for a Linux environment. A similar approach (killing the process) would probably also work in Windows.
I am running some simulations on another machine via ssh. Here is what I do:
ssh username@ip.ip.ip.ip
Go to the right directory
cd path/to/folder
And then I just call my executable
.\myexecutable.exe
The issue is that every time the ssh connection disconnects, the simulation stops. How can I make sure the simulation doesn't stop on the other machine? Will I somehow receive potential error messages (assuming the code crashes) once I reconnect (ssh)?
You should launch screen or tmux to create a terminal from which you can detach, leave running in the background, and later reattach (see the sketch after the links below).
Further reading:
http://ss64.com/osx/screen.html
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/screen.1.html
You may also want to try out Byobu:
http://byobu.co
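As a minimal sketch of the screen workflow, assuming the executable from the question (the session name "sim" is just an example):
$ screen -S sim            # start a named screen session
$ ./myexecutable.exe       # launch the simulation inside it
# press Ctrl+a then d to detach; the program keeps running on the server
$ screen -r sim            # after reconnecting via ssh, reattach to the session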
Run your command as follows: nohup ./myexecutable.exe >nohup.out 2>&1 &
The & runs the command in the background.
The >nohup.out 2>&1 sends your stdout and stderr to nohup.out.
Note the './' as opposed to '.\', which won't work on OS X.
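After reconnecting, you can check whether the job is still alive and follow its output, e.g. (the process name is just the example from the question):
$ ps aux | grep [m]yexecutable     # the [m] trick keeps grep from matching itself
$ tail -f nohup.out                # follow the stdout/stderr captured by nohup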
For the past few years, I have been using an rsync one-liner to back up important folders on my Mac Mini desktop (OSX 10.9, 2.5 GHz i5, 4 GB RAM) to a FreeNAS box (0.7.2 Sabanda revision 5266, Pentium D 2.66 GHz, 822MiB RAM [reported by the system, I think there's 1 GB in there]). I am running an rsync daemon on the FreeNAS box. Recently, these transfers have been hanging indefinitely. I have done the usual Google-fu and am unable to identify the source of the problem or a solution.
The one-liner is:
rsync -rvOlt --exclude '.DS_Store' \
--exclude '.com.apple.timemachine.supported' \
--delete /Volumes/Storage/Music/Albums/ 192.168.1.100::albums
I have tried enabling -vvv and --progress, but there is no pattern that I can discern between what hangs and what doesn't. Heck, if I retry, the same file might hang at a different point during the transfer or not at all. A dry run (-n) does not always succeed either. The only "success" I've had is implementing a timeout (--timeout=10) and rerunning the command over and over. Eventually, I creep along, but with no guarantee of success and at a pace that is unacceptable. I've reached a point where I have one file that I can't get past.
The Mac Mini is connected to my router via 5 GHz Wi-Fi. The FreeNAS box is wired into that same router on a 100 Mbit port. When transfers are actually going, rsync --progress reports 2.5-4 MB/s. According to --progress, a hang is literally just that: no data transfer is occurring as far as I can tell.
I need help with both the diagnostics and the solution.
I was having the same problem. Removing -v didn't work for me. My use-case is slightly different in that I'm going from source (EXT4) to ExFAT. The issue for me was that rsync was attempting to preserve device files and permissions, which ExFAT doesn't support. I was using the -hrltDvaP switches. The -D and -a switches seemed to be my problem. The -a switch translates to -rlptgoD (no -H,-A,-X). The -p, -g, and -o switches seemed to be my root cause as rsync was barfing on one or all of those during runtime. Removing -a and specifying -Prltvc switches explicitly is working for me.
bkupcmd="nice -n$nicelevel /usr/bin/rsync -Prltvc --exclude-from=/var/tmp/ignorelist "
I've been running into the same thing again and again and it seems to help if you drop the -v option (which is annoying if you need that output).
Try using --whole-file/-W.
This option disables the rsync delta-transfer algorithm.
That is what worked for us (WSL to OS X).
Our full sync flags were -avWPle
(-e was there because we were using ssh, and that has to be the last flag).
This happened to me when the remote device ran out of space. The error wouldn't show when the --verbose option was used; turning that off yielded some STDERR output that explained that the remote device was out of space. When I freed some space, I was able to run rsync again with --verbose and everything went fine.
I am using openSUSE 13.2 Linux, rsync version 3.1.1-2.4.1.x86_64, and I experienced similar problems doing an rsync between my laptop and an external hard disk, with the destination device definitely having enough free space.
I thought I got an improvement by omitting the -v option, but after 10 minutes it was hanging again; strace said:
select(5, [], [4], [], {60, 0}) = 0 (Timeout)
And with "iotop" I counld see confirm that the rsync processes did no significant disk IO any more.
Neither removing the -v option nor limiting the bandwidth using --bwlimit fixed the problem.
Just had a similar problem while doing rsync from a hard disk to a FAT32 USB drive. In my case rsync froze in less than a second and did not react at all after that ... I left it with Ctrl+C.
Found out that the problem was a combination of the use of hardlinks on the hard disk and having a FAT32 filesystem on the USB drive, which does not support hardlinks.
Formatting the USB drive with ext4 solved the problem for me.
In my situation rsync was not actually failing.
I have regular server backups which transfer large files (500 GB+) over ssh with --append-verify or --checksum specified.
What I have found upon analysis is that once the client side completes its file checks, the server-side checks start. This means that while the server is doing its checks, the client side will appear hung and frozen; run htop on the server to see rsync working away.
This is likely a non-issue if rsync is run in daemon mode on the server and the rsync protocol is used instead of ssh for transfers.
On a related note, this very LONG wait would trigger an SSH timeout and an "rsync: connection unexpectedly closed (254 bytes received so far) [sender]" error message; the solution is to add ClientAliveInterval 120 and ClientAliveCountMax 720 to /etc/ssh/sshd_config.
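In other words, something along these lines on the server (the reload command is an assumption; it varies by distribution):
# /etc/ssh/sshd_config
ClientAliveInterval 120
ClientAliveCountMax 720
$ sudo systemctl reload sshd       # or 'ssh', depending on the distribution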
I've seen this quite often on 3.0.9 on a directory with hardlinks, but it also happened on 3.1.3.
There is a nice analysis in Debian bug 820916: when its internal sockets are congested with errors, rsync could go into a deadlock.
This might have been fixed in a 3.2 release just a few days ago (Jun 2020):
Avoid a hang when an overabundance of messages clogs up all the I/O buffers.
The only good workaround I can think of is, if the problem is not persistent, to put timeout in front of it (timeout rsync <args> <source> <destination>) and then retry. If it is persistent for you, you're the lucky one who gets to debug it :D
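A minimal sketch of that retry approach, keeping the placeholders from above (the 300-second limit is arbitrary):
until timeout 300 rsync <args> <source> <destination>; do
    echo "rsync stalled or failed, retrying..."
    sleep 10
done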
It also happens when the user on the target machine does not have write permission on the target folder.
You can try giving write permission to others on the target folder:
sudo chmod -R o+w /path/to/target-folder
In my case, it was the IPC (Intrusion Protection Component) in our firewall. It sees all the TCP SYN packets as a flood attack and kills the connection. I left an rsync-over-NFS session open, turned off the IPC in the server's firewall rule, and it started working again right away.
rsync -ravh /source /destination
When it happened, I was not able to kill the rsync session. It locked up the NFS mount and I would have to reboot the client machine to get it to work again. The strange thing is it would copy some files over and then all of a sudden stop, and it always seemed to stop on the same file. So I was looking for file issues, permission issues, and TCP offloading issues, and tried removing the -v in the rsync call. In my case it even happened with a simple:
cp -rp /source /destination
So I knew then to start looking at other factors. If you have any sort of intrusion protection on a firewall or router between the servers, try turning it off temporarily to see if that solves your issue as well.
Most likely not "your" problem, but I stumbled upon this question when I was researching a similar behavior:
I'm observing "hanging" when the target side has too much I/O load, e.g. on one of my small business servers when someone is resyncing his IMAP account and downloading large batches of data while a backup job that writes his data is running.
In this situation I notice a steep drop in performance for rsync, noticeable as a high load value in top on the target machine, even though CPU and memory are fine.
Waiting for the other process to finish has helped every time, as has interrupting rsync and attempting it again at a later time.
I was having the same problem, and it was because I was running out of memory during the rsync. I created a swap file and the problem was solved.
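On Linux, creating a swap file looks roughly like this (the 2G size is an arbitrary example):
$ sudo fallocate -l 2G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile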
Had an rsync hanging issue on Ubuntu 16. None of the options above helped. The problem was the source drive (an external SSD), which suddenly became faulty. I tried several disk checks, but all of them got stuck. I ended up rebooting the system and the disk suddenly became accessible again.
Holger Ohmacht aka h8ohmh / 8ohmh:
As far as I could investigate, the problem lies in the filesystem buffering / the interworking of the hard disk and the hardware.
Temporary workaround for local drives (e.g. USB3 <-> HD): a script which polls the free disk space. If the free disk space stops changing, rsync has stalled and has to be restarted:
cmd="rsync -aW --progress --stats --preallocate --super \
<here your source dir> \
<here your dest dir>"
eval "$cmd" &
rm ./ndf.txt
rm ./odf.txt
while [[ 0 == 0 ]]; do
df > ./ndf.txt
cmp ./odf.txt ./ndf.txt
res="$?"
echo "$res"
if [[ $res == 0 ]]; then
echo "###########################################"
ls -al "./ndf.txt"
ls -al "./odf.txt"
killall rsync
eval "$cmd" &
else
cp ./ndf.txt ./odf.txt
fi
sleep 60
done
Change <here your source dir> and <here your dest dir> to your own paths!
In my case, the stalling is always triggered by rsync's --preallocate option (which I normally use for better disk performance and to keep blocks contiguous), so as long as the disk and filesystem drivers are not reworked, this workaround is all there is.
I know these kinds of questions have been asked for years, and the answer to them is often screen or tmux.
I will certainly use screen from the beginning if I know I will leave the session for a long time, or if the network is too bad to maintain a reliable connection.
The main problem is when I start a session and only later find it must run for a long time, or the connection is simply lost by accident. In the latter case, when I immediately start another session, I often find the previous processes have not been killed yet, but I have no way to reconnect to their terminals.
So I wonder if it is possible to prevent normal processes from being killed even a long time after the SSH session is accidentally disconnected. Most importantly, I want to be able to reconnect to their terminals without having started them in screen in advance.
If not, is it possible to move an already started bare ssh session into a new screen session for later reconnection?
I don't believe it's possible without something like screen. Once your pseudo-TTY is lost, I'm almost certain it can't be recovered from a different shell (at least not without some gnarly hacks).
As for adding an existing process to a new screen session, I think that is possible. Try the instructions here:
http://monkeypatch.me/blog/move-a-running-process-to-a-new-screen-shell.html
The first thing to do is to suspend the process. In my case, Irssi can be suspended by typing Ctrl + Z.
Secondly, resume the process in background:
$ bg
Now, we will detach the process from its parent (the shell), so that when the parent process is terminated, the child (Irssi) will be able to continue. For this, we use the disown builtin:
$ disown irssi
Launch a screen session:
$ screen
As we are in a screen session, we will retrieve the irssi process. To do so, we use the reptyr command, which takes a pid:
$ reptyr <pid>
To avoid the tedious pid research, we can use the pgrep command:
$ reptyr $(pgrep irssi)
Now that the process is in a screen shell, we can safely detach our session and no longer worry about killing our X server or closing our ssh connection.
You'll need reptyr for this.
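reptyr is packaged on most distributions, e.g. on Debian/Ubuntu:
$ sudo apt-get install reptyr
Note that on kernels with the Yama ptrace restriction enabled, reptyr may be refused permission to attach; temporarily setting kernel.yama.ptrace_scope to 0 (sudo sysctl kernel.yama.ptrace_scope=0) is a commonly cited workaround.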
OPTION 2:
I suspect you may be trying to solve the wrong problem. If your SSH connection is dropping, why not address that? You can set SSH to be incredibly tolerant of timeouts and disconnects by tweaking your connection settings.
On your client, in $HOME/.ssh/config add:
ServerAliveInterval 60
ServerAliveCountMax 5
Now your sessions won't time out even if the server doesn't respond for 5 minutes.
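For example, a complete entry in $HOME/.ssh/config could look like this (applying to all hosts; adjust the Host pattern as needed):
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 5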
Use ssh-tmux instead of tmux:
function ssh-tmux(){
    if ! command -v autossh &> /dev/null; then echo "Install autossh"; return 1; fi
    autossh -M 0 "$@" -t 'byobu || { echo "Install byobu-tmux on server..."; bash; }'
}
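Usage is then the same as plain ssh, e.g. (the hostname is a placeholder):
$ ssh-tmux user@server.example.com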
I worked on a text file using nano and I got disconnected. After I logged in again, I saw the nano process from the previous session was still running, but I couldn't switch to that nano instance. So I killed the nano process, and it then created a file named filename.save, which had my changes from the first session.
I made a pseudo terminal with method described here: http://lists.apple.com/archives/student-dev/2005/Mar/msg00019.html
The terminal itself worked well. Anyway, the problem is that the terminal cannot be switched to a child process. For example, I launched bash with NSTask, and if I execute ftp within that bash, it stops automatically.
ftp
ftp
ftp>
[1]+ Stopped ftp
bash-3.2$
And if I try to continue the ftp with fg, it terminates quietly. (I checked this with Activity Monitor)
fg
fg
ftp
bash-3.2$
fg
fg
bash: fg: current: no such job
bash-3.2$
I think it needs some more infrastructure (to complete the pseudo terminal) in order to switch control to the child process. What's required to do this?
I could finally do this by creating a pty device. To make a program behave like "Terminal", it must be executed in an interactive terminal, and that needs a pseudo-terminal device.
Unfortunately, AFAIK, NSTask does not support any pty ability, so I had to drop down to the BSD layer.
Here's my current implementation: https://github.com/eonil/PseudoTeletypewriter.Swift
sudo is working well, and I believe ssh should also work.
Have a look at the source code of MFTask and PseudoTTY.app (which works on Mac OS X 10.6).
See: http://www.cocoadev.com/index.pl?NSTask
For a pty command line tool see here.