I am remotely working on a server that automatically logs me out after 5 minutes of inactivity. Here's the message that it usually provides when it does so:
Read from remote host XXXXXXX: Operation timed out
I typically have several sessions open, which I use at roughly 30-minute intervals, so I wonder what I could do to avoid getting disconnected. I've already tried:
[a] hiring a monkey to hit some keys before the session logs me out
[b] running the top command
[c] threatening the server administrator :)
Any other suggestions? Thanks.
This has been answered on Stack Overflow. I'm adding the link here for people who don't want to go to a third-party forum when they search for this answer (as I did):
https://stackoverflow.com/questions/13390710/mac-terminals-how-to-keep-alive
Add to ~/.ssh/config:
ServerAliveInterval 30
or start your ssh session using:
ssh -o ServerAliveInterval=30 username@hostname
And BTW: the answer is not specific to Mac.
You might also consider keeping a file open in vi or paging through one with more, which keeps the session occupied.
Has anyone faced a similar problem? We have a kubeapi proxy which impersonates users via SSO.
The kubectl tool works just fine with any command unless you do tail -f.
I can see that app data is coming back every 1-5 seconds, but it takes ~45 seconds to be output.
We use http/server from the standard Go packages, and our proxy is based on https://github.com/ericchiang/kube-oidc/issues
TCP dump attached. Thanks.
What does it mean when the terminal throws this error, and how do I solve it?
packet_write_wait: Connection to xxx.xxx.xxx.xxx: Broken pipe
It just started happening today, after working normally for years. My terminal keeps disconnecting at a certain point. I have already searched on Google, but most of the results are about "Write failed: Broken pipe," which I solved years ago. I just ran into this new, annoying problem today.
I experienced this problem as well and spent a few days trying to bisect it.
As noted elsewhere, playing with SSH keepalive parameters (ClientAliveInterval, ClientAliveCountMax, ServerAliveInterval and ServerAliveCountMax) or kernel TCP parameters (TCPKeepAlive on/off) does not solve the problem.
After playing with USB to Ethernet drivers and tcpdump, I realized the issue was due to the kernel 4.8 I was using. I switched the source (sending side) to 4.4 LTS and the problem disappeared (rsync via ssh and scp were working nicely again). The destination side can remain on 4.8 if you want, in my use case this was working (tested).
On the technical side, the Wireshark dump I made (below) narrows the issue down a little: the TCP channel carrying the SSHv2 protocol is being reset (TCP RST flag set to 1), causing the connection to abort. I don't know the cause of that RST yet; I need to bisect from 4.8.1 to 4.8.11 to find it.
I'm not saying your problem is specifically due to kernel 4.8, but given the date you posted your question, chances are high that you are currently using a kernel more recent than 4.4.
If that is an ssh connection, then you might want to make sure you send a keepalive message to the server.
ServerAliveInterval seems to be the most common strategy to keep a connection alive. To prevent the broken-pipe problem, here is the ssh config I use in my ~/.ssh/config file (the system-wide client config is /etc/ssh/ssh_config; note that sshd_config configures the server, not the client):
Host myhostshortcut
HostName myhost.com
User barthelemy
ServerAliveInterval 60
ServerAliveCountMax 10
Connect through another wifi.
I don't know why or how it works, but it does.
The original poster sthapaun already mentioned this solution in a comment, but I want to add that the solution works for me, too.
I am trying to study network traffic in my lab. I have 31 computers and would like to use all of them to simulate different traffic conditions. However, instead of logging into all 31 and running a command one by one on each machine, I would like to know if there is a shortcut.
My scenario: I want to investigate the effect on bandwidth when x computers are transmitting to a server. I have one server computer and 30 available clients. Testing with two boxes is easy:
client: ./iperf -c <server-ip> -p <port>
server: ./iperf -s -p <port>
I'm trying to avoid running that client command on 30 computers one by one. However, I don't know if iperf allows you to specify a client IP address... I was hoping I could just write a script and drive all 30 machines from one physical workstation.
Is this possible?
I just started using iperf so I may be completely wrong here, but I think the tests need to be initiated by the client... However this is not really an iperf related issue.
If you have ssh access to all the machines, you could set up a cron job that initiates the tests at different times, which would let you prepare scenarios with different loads and different users.
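A minimal sketch of such a driver script, assuming passwordless SSH is set up and the clients are named client01..client30 (the server address, port, and test duration below are placeholders you would adjust for your lab):

```shell
#!/bin/sh
# Kick off iperf clients on all 30 machines from one workstation.
# Assumptions: passwordless SSH, hosts named client01..client30,
# and an iperf server already listening on 192.168.1.100:5001.
SERVER=192.168.1.100
PORT=5001
for host in $(seq -w 1 30 | sed 's/^/client/'); do
    # Start each client in the background so all 30 transmit at once.
    ssh "$host" "./iperf -c $SERVER -p $PORT -t 60" &
done
wait   # block until every test run has finished
```

Dropping the `&` would instead run the clients one after another, which gives you a second scenario (sequential load) from the same script.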
You could use clusterssh to manage a single window opening connections to multiple machines.
You can find it on SourceForge.
Hope it helps, it's kind of a late response.
I use SSH in a terminal window many times during a day.
I remember reading about a way to reuse a single connection so that the TCP and SSH handshaking don't have to happen every time I establish another request to the same host.
Can someone point me to a link or describe how to establish a shared ssh connection so that subsequent connections to the same host will connect quickly?
Thanks.
Answering my own question. Improving SSH (OpenSSH) connection speed with shared connections describes using the "ControlPath" configuration setting.
UPDATE: For connections that are opened and closed often add a setting like ControlPersist 4h to your ~/.ssh/config. See this post about SSH productivity tips.
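For reference, a complete shared-connection stanza in ~/.ssh/config might look like the following (the host name and socket path are just examples):

```
Host myserver
    HostName myserver.example.com
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 4h
```

The socket directory has to exist first (mkdir -p ~/.ssh/sockets). The first connection to the host becomes the master; subsequent connections reuse its TCP and SSH session and so skip the handshake.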
If you want to keep the terminal open, the easiest way is producing I/O ("tail -f" on a log file, or "while [ -d . ]; do cat /proc/loadavg; sleep 3; done").
If you want to speed up the connection handshake, one tweak I use is adding "UseDNS no" to the server's sshd_config.
I'm trying to use PuTTY 0.60 to log in to an OpenSSH 5.3 server. Connections with OpenSSH from another Linux server work, but PuTTY fails. PuTTY's event log tells me "Software caused connection abort" right after the DH key exchange; the server log doesn't report anything (set to INFO). I analyzed the traffic with Wireshark and got a whole bunch of "TCP Retransmission" and "TCP Dup ACK" packets after said key exchange.
Sometimes I was able to log in, but at some point (usually < 2 min.) the connection froze without any logged messages. Sadly, I didn't capture a trace.
The server is my own (Funtoo with genkernel and gentoo-sources 2.6.34), so I may tweak it, but I'd still like to know what causes the error. Any suggestions? Thank you!
Ok, that was weird.
The problem's cause was a network BIOS setting: a static IP with the NIC set to shared (Broadcom Extreme II); the system in question is a Dell blade. With these settings I somehow ended up with multiple MAC addresses for the same IP, which killed my SSH connections. I honestly hope this helps somebody else...