How to Quit TFTP script - bash

I have a tftp script here that, when run, just hangs and leaves me on a blank line (which tells me it's hanging). I can quit the script with Ctrl+C...
#!/bin/bash
hostname=$1;
filename=$2;
tftp <</dev/null
mode binary
get $hostname:$filename
quit
I have also tried to add EOF at the end of the script, but that doesn't work either.
Here is my command line...
$ ./tftpShell.sh host1 myFileName >/home/aayerd200/tftpoutput.txt 2>/home/aayerd200/tftperror.log
So when I run the script, it just leaves me on a blank line. However, it does actually do the work it should with get, I do get the file I want.
Of course host1 and myFileName are actual fields that I replaced here for security.
How can I stop this script? Based on $ ps -u aayerd200 (or, when run by PHP, $ ps -u daemon), I believe it is just tftp hanging.

You have /dev/null as a here-document "delimiter". Try some random set of characters, like EOF, that has no special meaning to the shell, and terminate the here-doc with the same word. (The <<- form used below also strips leading tabs, so the body may be indented with tabs.)
tftp <<-EOF
mode binary
get $hostname:$filename
quit
EOF
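
Applied to the script from the question, the whole thing would look something like this (an untested sketch; note the closing EOF must start at the beginning of the line unless it is indented with tabs for the <<- form):
#!/bin/bash
hostname=$1
filename=$2
tftp <<-EOF
mode binary
get $hostname:$filename
quit
EOF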

Okay, so I just made this a background process by appending & to the end of the command. Then I ran $ echo $! to get the PID, and then $ kill PID.
That was my solution to this, for now at least.
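
For reference, a minimal sketch of that workaround, using the same command line as before:
./tftpShell.sh host1 myFileName >/home/aayerd200/tftpoutput.txt 2>/home/aayerd200/tftperror.log &
pid=$!            # $! holds the PID of the most recent background job
# ... the get still runs and fetches the file ...
kill "$pid"       # stop the hung tftp session when done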

Related

Capturing ssh output in bash script while backgrounding connection

I have a loop that will connect to a server via ssh to execute a command. I want to save the output of that command.
o=$(ssh $s "$@")
This works fine. I can then do what I need with the output. However I have a lot of servers to run this against and I'm trying to speed up the process by backgrounding the ssh connection, basically to do all of the requests at once. If I wasn't saving the output I could do something like
ssh $s "$#" &
and this works fine
I haven't been able to get the correct combination to do both.
o=$(ssh $s "$@")&
This doesn't give me any output. Other combinations I've tried appear to try to execute the output. Suggestions?
Thanks!
A process going to the background gets its own copies of the file descriptors. The stdout captured by o=$(...) will not be available in the calling process. However, you can bind stdout to a file and access the file afterwards.
ssh $s "$#" >outfile &
wait
o=$(cat outfile)
If you don't like files, you could also use named pipes. This way the 'wait' is done by the 'cat' command. The pipe can be reused and consumes no space on the disk.
mkfifo testpipe
ssh $s "$#" >testpipe &
o=$(cat testpipe)
I would just use a temporary file. You can't set a variable in a background process and access it from the shell that started it.
ssh "$s" "$#" > output.txt & ssh_pid=$!
...
wait "$ssh_pid"
o=$(<output.txt)
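
To scale this to the many-servers case from the question, a sketch along these lines could run one background ssh per server and collect the output afterwards (the $servers list and the output.$s file names are placeholders of my own, not from the answers above):
for s in $servers; do
    ssh "$s" "$@" > "output.$s" &   # one background connection per server
done
wait                                # wait for all of them at once
for s in $servers; do
    o=$(<"output.$s")               # read each server's output back in
    printf '%s:\n%s\n' "$s" "$o"
done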

Send echo command to an external xTerm

I have a bash script, and I want to be able to keep a log in an xterm, and be able to send echo to it anytime.
How would I do this?
Check the GPG_TTY variable in your xterm session. It should have a value similar to
GPG_TTY=/dev/pts/2
This method should be available for terminals that support GNU Pinentry.
Another option to determine the current terminal name is to use
readlink /proc/self/fd/0
The last method applies only to Linux.
Now if your bash script executes a command like
echo "Hello, world!" > /dev/pts/2
This line should appear on the xterm screen.
I managed to make a console by running an xterm with a while loop that clears the screen, cats the contents of the log file, pauses for a second, then loops again. Here was the command:
xterm -T Console -e "while true; do clear && cat ${0}-LOG.txt && sleep 1; done"
Then to send something to the console:
echo -e "\e[91;1mTest" >> ${0}-LOG.txt
And the console will update each second.
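
Putting the two halves together, a minimal sketch (the log helper function is my own naming, not from the answers above):
LOG="${0}-LOG.txt"
log() { echo -e "$*" >> "$LOG"; }    # append a line to the console log
xterm -T Console -e "while true; do clear; cat '$LOG'; sleep 1; done" &
log "\e[91;1mConsole started"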

Why isn't this command returning to shell after &?

In Ubuntu 14.04, I created the following bash script:
flock -nx "$1" xdg-open "$1" &
The idea is to lock the file specified in $1 (flock), then open it in my usual editor (xdg-open), and finally return to prompt, so I can open other files in sequence (&).
However, the & isn't working as expected. I need to press Enter to make the shell prompt appear again. In simpler constructs, such as
gedit test.txt &
it works as it should, returning the prompt immediately. I think it has to do with the existence of two commands in the first line. What am I doing wrong, please?
EDIT
The prompt is actually there, but it is somehow "hidden". If I issue the command
sudo ./edit error.php
it replies with
Warning: unknown mime-type for "error.php" -- using "application/octet-stream"
Error: no "view" mailcap rules found for type "application/octet-stream"
Opening "error.php" with Geany (application/x-php)
__
The errors above are not related to the question. But instead of __ I see nothing. I know the prompt is there because I can issue other commands, like ls, and they work. But the question remains: why is the prompt hidden? And how can I make it show normally?
Why isn't this command returning to shell after &?
It is.
You're running a command in the background. The shell prints a new prompt as soon as the command is launched, without waiting for it to finish.
According to your latest comment, the background command is printing some message to your screen. A simple example of the same thing:
$ echo hello &
$ hello
The cursor is left at the beginning of the line after the $ hello.
As far as the shell is concerned, it's printed a prompt and is waiting a new command. It doesn't know or care that a background process has messed up your display.
One solution is to redirect the command's output to somewhere other than your screen, either to a file or to /dev/null. If it's an error message, you'll probably have to redirect both stdout and stderr.
flock -nx "$1" xdg-open "$1" >/dev/null 2>&1 &
(This assumes you don't care about the content of the message.)
Another option, pointed out in a comment by alvits, is to sleep for a second or so after executing the command, so the message appears followed by the next shell prompt. The sleep command is executed in the foreground, delaying the printing of the next prompt. A simple example:
$ echo hello & sleep 1
hello
[1] + Done echo hello
$
or for your example:
flock -nx "$1" xdg-open "$1" & sleep 1
This assumes that the error message is printed in the first second. That's probably a valid assumption for your example, but it might not be in general.
I don't think the command is doing what you think it does.
Have you tried running it twice, to see whether the lock cannot be obtained the second time?
If you do, you will see that it doesn't fail, because xdg-open forks to exec the editor and then exits, releasing the lock almost immediately. Also, if it did fail, you would expect some indication.
You should use something like this
flock -nx "$1" -c "gedit '$1' &" || { echo "ERROR"; exit 1; }

plink won't return to command prompt

I try to execute a bash script via plink. Script looks something like this:
echo "# Starting process..."
./bin/process "process.cfg" &
disown %1
echo "# Done!"
When i execute this script in a terminal on linux, everything works fine. After the "Done!" line I get a command prompt (as expected).
Now when I run this script via plink, the output stops after the "Done!" line, but plink won't return to the command prompt and "hangs" until Ctrl+C.
The script is placed in a file and given to plink with the -m parameter
I tried adding 'logout', 'exit', 'set -e' at the end of the script, but it doesn't help. Also adding -batch, -T or -N to the plink command brought no success.
Any ideas on how to fix this?
Ok, it seems I had to detach stdout/stderr from the terminal.
In a normal terminal this wouldn't matter, of course, but plink remained in a "busy" state because of this.
So, inside my bash script (which executed the command) I had to change:
./bin/process "process.cfg" &
to:
./bin/process "process.cfg" /dev/null 2>&1 &
plink now returns the correct "finished" state at the end of the bash script.
plink.exe -P PORT_NUM -v USERNAME@HOST_IP -pw PASSWD "COMMAND >/dev/null &"
& moves your process to the background.
> /dev/null lets your command run silently by sending stdout to the dummy null device (add 2>&1 if you also want to silence stderr).
Note: the shell command is wrapped in "double quotations".
Plink has a -batch parameter which disable all interactive prompts. It may be what you need here to avoid hanging until ctrl-C.
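For instance, combining -batch with the flags already shown above (an untested sketch):
plink.exe -batch -P PORT_NUM USERNAME@HOST_IP -pw PASSWD -m script.sh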

Can I change the name of `nohup.out`?

When I run nohup some_command &, the output goes to nohup.out; man nohup says to look at info nohup which in turn says:
If standard output is a terminal, the command's standard output is appended to the file 'nohup.out'; if that cannot be written to, it is appended to the file '$HOME/nohup.out'; and if that cannot be written to, the command is not run.
But if I already have one command using nohup with output going to nohup.out and I want to run another nohup command, can I redirect the output to nohup2.out?
nohup some_command &> nohup2.out &
and voila.
Older syntax for Bash version < 4:
nohup some_command > nohup2.out 2>&1 &
For some reason, the above answer did not work for me; I did not return to the command prompt after running it as I expected with the trailing &. Instead, I simply tried with
nohup some_command > nohup2.out&
and it works just as I want it to. Leaving this here in case someone else is in the same situation. Running Bash 4.3.8 for reference.
The methods above will truncate your output file every time you run the nohup command.
To append output to a user-defined file, you can use >> in the nohup command.
nohup your_command >> filename.out &
This command will append all output to your file without removing the old data.
Since file handles point to inodes (which are stored independently of file names) on Linux/Unix systems, you can rename the default nohup.out to any other filename at any time after starting nohup something &. So one could also do the following:
$ nohup something&
$ mv nohup.out nohup2.out
$ nohup something2&
Now something adds lines to nohup2.out and something2 to nohup.out.
my start.sh file:
#!/bin/bash
nohup forever -c php artisan your:command >>storage/logs/yourcommand.log 2>&1 &
There is only one important thing: the first command must be nohup, the second command must be forever, and -c is forever's parameter; the "2>&1 &" part belongs to nohup. After running this line you can log out of your terminal, log back in, and run "forever restartall"... voilà. You can restart, and you can be sure that if the script halts, forever will restart it.
I <3 forever
