I am trying to issue commands to telnet. When I initially issue a simple connection command such as:
telnet localhost 9300
I am immediately connected, which is fantastic, but messages instantly start printing in the shell every second. These are expected responses from the program listening on that port. The issue is: how do I issue a second command when the shell won't stop printing data? I can't type in the shell while it is scrolling. Thanks for any help, and sorry for the newbie-type question.
You can type blind: just ignore that your line disappears into the stdout of telnet.
With copy-paste from a buffer you can at least see your line before sending it, but it will still scroll off the screen.
Another way is redirecting the output of telnet to a file.
When you want to see the telnet output, open a second window.
Window 1:
telnet localhost 9300 > telnet9300.out 2>&1
enter your commands here
Window 2:
# wait for telnet9300.out to be created
tail -f telnet9300.out
# Press ^C when you have seen enough
I have an application that launches xterm and dumps UART logs. I am able to see it launch and dump the logs in the GUI. However, over a remote session I want the xterm output to run as a background process somewhere so that I can switch back and forth to it within a single terminal.
Using GUI
Using remote terminal (SSH)
$ xterm
xterm: Xt error: Can't open display: :0
I tried something like the following, but it didn't work:
alias xterm="/bin/bash -c"
I don't want to have X forwarding and launch a window on my local machine as well.
If you just need the logs, you most likely don't need an X server or xterm.
You can simply run the target command itself. From your screenshot it looks like the command might be telnet 127.0.0.1 <port_number>. You can find it in the script that your application launches, or with ps -ef while it's running. If it's a UART, you can also use minicom or socat to connect directly to the serial port without any extra programs. That way, you don't even need telnet.
You can combine this command with either screen or tmux so that it runs in the background and you can switch to it from any terminal or console. Just run screen with no arguments, then run the command inside the virtual screen. Detach with Ctrl-a d, and your command will continue to run in the background, ready for you to reconnect to it at any time with screen -r.
Moreover, screen can also connect to serial port directly so you get two for the price of one.
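A minimal sketch of that workflow, assuming the command is telnet 127.0.0.1 9300 (substitute whatever your application actually runs, or a serial device path):
$ screen                    # start a new screen session in this terminal
$ telnet 127.0.0.1 9300     # run the logging command inside it
# press Ctrl-a then d to detach; the command keeps running in the background
$ screen -r                 # reattach later, from any terminal or console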
The thing with xterm is that it will not write the logs anywhere except to the graphics buffer, and even there only as pixels, which is not suitable for any processing. If you insist on going that way, you have several options:
Change the script that application runs (might not be possible depending on your situation)
Replace /usr/bin/xterm with your own dummy script that just runs the command under bash instead of opening xterm and redirects the output to a file (ugly, but you could probably avoid breaking other applications by putting it somewhere else and changing PATH). In your script you can use bash's redirection features such as >, or pipe the output to tee; see the sketch after this list.
Start a VNC server in the background and set the DISPLAY environment variable when you run your application to the number of virtual screen. In this case, any windows from application will open on VNC virtual screen and you can connect to it as you please.
Use xvfb as a dummy X server and combine it with xterm logging, etc.
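A rough sketch of such a dummy script (the log path is made up, and it assumes the application invokes xterm as xterm -e <command>; adapt it to whatever your script actually calls):
#!/bin/bash
# fake "xterm": put this earlier in PATH than /usr/bin/xterm
# assumption: the application calls `xterm -e <command> [args...]`
[ "$1" = "-e" ] && shift
# run the command directly and keep a copy of its output in a log file
"$@" 2>&1 | tee -a "$HOME/uart.log"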
Solution 1: Fake xterm on X11-less systems
You can also create a wrapper function that replaces the xterm command. Test this out on a laptop with X11:
$ function xterm {
    echo "hello $@"
}
$ xterm world 1
hello world 1
$ export -f xterm
$ /bin/xterm # opens a new xterm session
$ xterm world 2 # commands executed in second terminal
hello world 2
This means you've replaced the xterm command with a function in all of the child processes.
Now, if you already know that your script will work in a terminal without xterm, you could create a function that accepts all of the parameters and executes them directly. No need for the more complicated screen setup or for replacing /usr/bin/xterm.
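A minimal sketch of such a function (the -e handling is an assumption about how the application invokes xterm, not taken from the question):
function xterm {
    # ignore xterm's -e flag if present, then run the command
    # directly in the current terminal instead of a new window
    [ "$1" = "-e" ] && shift
    "$@"
}
export -f xterm   # child bash shells now find this function instead of /usr/bin/xterm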
Solution 2: Dump UART data for the winz
If you want to save all of the UART data into a file, this is easily done by creating a screen session with a log file. The command below will create a session named myscreensessionname that listens on the serial connection /dev/ttyUSB0 and writes its data to /home/$USER/myscreensessionname.log.
$ screen -dmS myscreensessionname -L -Logfile \
    /home/$USER/myscreensessionname.log /dev/ttyUSB0 115200
Note that if you're going to use multiple screen sessions, you might want to use serial IDs instead of /dev/ttyUSB0. You can identify the connections with udevadm as follows.
$ udevadm info --name=/dev/ttyUSB0 | grep 'by-id'
S: serial/by-id/usb-FTDI_TTL232R-3V3_FTBDBIQ7-if00-port0
E: DEVLINKS=/dev/serial/by-id/usb-FTDI_TTL232R-3V3_FTBDBIQ7-if00-port0 /dev/serial/by-path/pci-0000:00:14.0-usb-0:4.4.4.1:1.0-port0
Here, instead of /dev/ttyUSB0, I would make use of /dev/serial/by-id/usb-FTDI_TTL232R-3V3_FTBDBIQ7-if00-port0.
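With that, the earlier command would look something like this (same hypothetical session and log file names as above):
$ screen -dmS myscreensessionname -L -Logfile \
    /home/$USER/myscreensessionname.log \
    /dev/serial/by-id/usb-FTDI_TTL232R-3V3_FTBDBIQ7-if00-port0 115200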
EDIT:
You can attach to the screen session with the following command. Once inside the screen session, press Ctrl+a, then d to detach.
$ screen -Dr myscreensessionname
To view all of your screen sessions:
$ screen -list
There is a screen on:
2382.myscreensessionname (04/02/2021 10:32:07 PM) (Attached)
1 Socket in /run/screen/S-user.
I'm looking for some help with a script of mine. I'm new at bash scripting, and I'm trying to start a service on a remote host with ssh and then capture all the output of this service to a file on my local host. The problem is that I also want to execute other commands after this one:
ssh $remotehost "./server $port" > logFile &
ssh $remotehost "nc -q 2 localhost $port < $payload"
Now, the first command starts an HTTP server that simply prints out any request it receives, while the second command sends a request to that server.
Normally, if I executed the two commands in two separate shells, I would get the first response on the terminal, but now I need it in the file.
I would like the server to write all of the requests to the log file, keeping a sort of open ssh connection so that any new output from the server process is captured too.
I hope I made myself clear.
Thank you for your help!
EDIT: Here's the output of the first command:
(Output is empty in the terminal... it waits for requests).
As you can see, the command doesn't return anything yet; it just waits.
When I execute the second command on a new terminal (the request), the output of the first terminal is the following:
The request is displayed.
Now I would like to execute both commands in sequence in a bash script, sending the output of the first one (which is empty until the second command is run) to a file, so that ANY output triggered by later requests ends up in that file.
EDIT 2: As of now, with the commands above, the server answers the requests, but the output is not recorded in the log file.
I start ncat (on Windows 10) with
ncat -vvlp 1234 -e code.exe
and then connect with a second instance of ncat to the first instance
(ncat 127.0.0.1 1234).
code.exe is a C program written by me that can be controlled over stdin.
Everything I send via the second ncat gets forwarded to the stdin of code.exe. I know this because I can see code.exe create a folder after I send the command to do so. But the output is not sent back until code.exe exits.
Why is that, and how can I fix it?
OK, I found a solution to my problem. I disabled buffering of stdout by calling
setbuf(stdout, NULL);
at the start of my C program. (By default, stdout is fully buffered when it is connected to a pipe or socket instead of a terminal, so under ncat -e the output was only flushed once the buffer filled or the program exited.)
I just wrote my first bash script to start some redis instances on a development server. While it is mostly working, the last opened redis instance is blocking the active terminal – though I have the trailing & sign and the other started instances aren't blocking the terminal. How would I push them all to the background?
Here's the script:
#!/bin/bash
REDIS=(6379 6380 6381 6382 6383 6390 6391 6392 6393)
for i in "${REDIS[@]}"
do
:
redis-server --port $i &
done
It sounds like your terminal is not actually blocked; your prompt just got overwritten. It's a purely cosmetic issue: because of the way terminals work, bash doesn't know it needs to redraw the prompt, so it looks as if the command were running in the foreground.
Run the script again and blindly type ls followed by Enter. You'll probably see that the shell responds as normal, even though you can't see the prompt.
Alternatively, just hit Enter to get bash to redraw the prompt.
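If the startup output scrolling over your prompt bothers you, one option is to redirect each instance's output to its own file (a sketch; the log file names are made up, and redis-server's own --logfile or --daemonize options would achieve much the same):
#!/bin/bash
REDIS=(6379 6380 6381 6382 6383 6390 6391 6392 6393)
for i in "${REDIS[@]}"
do
    # send stdout/stderr to a per-instance file so nothing prints over the prompt
    redis-server --port "$i" > "redis-$i.log" 2>&1 &
done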
I need to give the user the ability to send/receive messages over the network (using netcat) while the connection is established (the user, in this case, is using nc as a client). The problem is that I need to send a line before the user starts interacting. My first attempt was:
echo 'my first line' | nc server port
The problem with this approach is that nc closes the connection when echo finishes its execution, so the user can't send commands via stdin because the shell is given back to them (and the answer from the server is not received either, because the server takes a few seconds to start answering and, since nc has closed the connection by then, the answer never reaches the user).
I also tried grouping commands:
{ echo 'my first line'; cat -; } | nc server port
It works almost the way I need, but if the server closes the connection, it waits until I press <ENTER> before giving me the shell back. I need to get the shell back as soon as the server closes the connection (in this case the client, my nc command, will never close the connection itself, except if I press Ctrl+C).
I also tried named pipes, without success.
Do you have any tip on how to do it?
Note: I'm using openbsd-netcat.
You probably want to look into expect(1).
It is cat that is waiting for the Enter.
You could write a script that runs after nc and kills the cat, so you get the shell back automatically.
You can try this to see if it works for you:
perl -e "\$|=1;print \"my first line\\n\" ; while (<STDIN>) {print;}" | nc server port
Here $|=1 turns on autoflush, the print sends your first line immediately, and the while (<STDIN>) loop keeps forwarding whatever the user types to nc.
This one should produce the behaviour you want:
echo "Here is your MOTD." | nc server port ; nc server port
I would suggest you use cat << EOF, but I think it will not work as you expect.
I don't know how you can send EOF when the connection is closed.