Understanding tty + bash

I see that I can use one bash session to print text in another, as follows:
echo './myscript' > /dev/pts/0 # assuming session 2 is using this tty
# or
echo './myscript' > /proc/1500/fd/0 # assuming session 2's pid is 1500
But why does the text ./myscript only print and not execute? Is there anything that I can do to execute my script this way?
(I know that this will attract a lot of criticism which will perhaps fill any answers that follow with "DON'T DO THAT!" but the real reason I wish to do this is to automatically supply a password to sshfs. I'm working with a local WDMyCloud system, and it deletes my .authorized_keys file every night when I turn off the power.)

why does the text ./myscript only print and not execute?
Input and output are two different things.
Writing to a terminal puts data on the screen. Reading from a terminal reads input from the keyboard. In no way does writing to the terminal simulate keyboard input.
There's no inherent coupling between input and output, and the fact that keys you press show up on screen at all is a conscious design decision: the shell simply reads a key, and then both appends it to its internal command buffer, and writes a copy to the screen.
This is purely for your benefit so you can see what you're typing, and not because the shell in any way cares what's on the screen. Since it doesn't, writing more stuff to screen has no effect on what the shell executes.
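You can watch the two directions operate independently with stty (a minimal interactive sketch; the date output is illustrative):
$ stty -echo    # keystrokes no longer appear on screen
$ date          # typed blindly; shown here for clarity, it won't actually echo
Mon Jan  1 00:00:00 UTC 2024
$ stty echo     # restore echoing
The command still executes even though nothing you typed was displayed, because reading input and echoing it are separate operations.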
Is there anything that I can do to execute my script this way?
Not by writing to a terminal, no.

Here is an example using a FIFO:
#!/usr/bin/bash
FIFO="$(mktemp -u)"    # -u generates a unique name without creating a regular file
mkfifo "$FIFO"
( echo testing123 > "$FIFO" ) &    # the writer blocks until sshfs opens the FIFO for reading
sshfs -o password_stdin testing@localhost:/tmp "$HOME/tmp" < "$FIFO"
rm -f "$FIFO"
How you store the password and send it to the FIFO is up to you.
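For a static password, a plain file that only you can read works too (a minimal sketch; the path, user and host are placeholders):
touch ~/.sshfs_pass && chmod 600 ~/.sshfs_pass    # create the file with owner-only permissions
printf '%s\n' 'mypassword' > ~/.sshfs_pass        # store the password (one line)
sshfs -o password_stdin user@host:/remote/dir "$HOME/mnt" < ~/.sshfs_pass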

You can accomplish what you want by using an ioctl system call:
The ioctl() system call manipulates the underlying device parameters of special files. In particular, many operating characteristics of character special files (e.g., terminals) may be controlled with ioctl() requests.
For the 'request' argument of this system call, you'll want TIOCSTI, which is defined as 0x5412 in my header files. (grep -r TIOCSTI /usr/include to verify for your environment.)
I accomplish this as follows in Ruby:
# Open this process's own controlling terminal for writing.
fd = IO.sysopen("/proc/#{$$}/fd/0", 'wb')
io = IO.new(fd, 'wb')
# Push each character into the terminal's input queue with TIOCSTI,
# as though it had been typed on the keyboard.
"puts 9 * 16\n".chars.each { |c| io.ioctl 0x5412, c }

Related

Send Ctrl + d to a server via bash

I'm working on a script that requires you to press Ctrl+D when you complete your entries. I'd like to send this keystroke automatically so I can script my work rather than having to redo it by hand.
You're probably talking about the "end of transmission" delimiter which is used to indicate the end of user input. If that's the case then you can always pipe data into your script. That is, instead of this:
$ test_script.sh
My input!
^D
You'd write that data to a file:
$ cat > input
My input!
^D
Then pipe that into the script:
$ test_script.sh < input
No ^D is required because once that file is fully read, the script sees end-of-file on its input. The < shell operator switches STDIN to read from a file instead of the terminal. Likewise, > can be used to capture the output of a program and save it to a file, as done in the second step here, though you can use any tool you'd like to create or edit that input file.
This works with pretty much any scripting language, from Python, Perl, Ruby to Node.js as well as bash and other shells.
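If you'd rather not create a separate input file, a here-document does the same job inline (a minimal sketch):
$ test_script.sh << 'EOF'
My input!
EOF
The script again sees end-of-file once the here-document is exhausted.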

Lock a file in bash using flock and lockfile

I spent the better part of the day looking for a solution to this problem and I think I am nearing the brink... What I need to do in bash is: write one script that will periodically read your input and write it into a file, and a second script that will periodically print out the complete file, BUT only when something new gets written to it, meaning it will never print the same output twice in a row. The two scripts need to communicate by means of a lock: script 1 will lock the file so that script 2 can't print anything from it, then script 1 will write something new into the file and unlock it (and then script 2 can print the updated file).
The only hints we were given were to use flock and lockfile - we didn't get any hints on how to use them, except that the problem MUST be solved with flock or lockfile.
edit: When I said I was looking for a solution, I meant that I tried every single combination of flock with those flags and I just couldn't get it to work.
I will write pseudocode of what I want to do. A thing to note here is that this pseudocode is basically the same as it would be done in C... it's so simple; I don't know why everything has to be so complicated in bash.
script 1:
place a lock on the file text.txt (no one else can read it or write to it)
read input
place that input into the file (not deleting the previous text)
remove the lock on text.txt
repeat
script 2:
print out the complete text.txt (but only if it is not locked; if it is locked, obviously you can't)
repeat
And since script 2 repeats all the time, it should print the complete text.txt ONLY when something new was written to it.
I have about 100 other commands like flock that I have to learn in a very short time, and I spent a whole day on just this one. It would be kind of you to at least give me a hint. As for the man page...
I tried to do something like flock -x text.txt -c read > text.txt, and tried every other combination as well, but nothing works. It takes only one command and won't accept arguments. I don't even know why there is an option for a command. I just want it to place a lock on the file, write into it, and then unlock it. In C it only takes flock("text.txt", ..).
Let's look at what this does:
flock -x text.txt -c read > text.txt
First, it opens text.txt for write (and truncates all its contents) -- before doing anything else, including calling flock!
Second, it tells flock to get an exclusive lock on the file and run the command read.
However, read is a shell builtin, not an external command -- so it can't be called by a non-shell process at all, mooting any effect that it might otherwise have had.
Now, let's try using flock the way the man page suggests using it:
{
    flock -x 3                       # grab a lock on file descriptor #3
    printf "Input to add to file: "  # Prompt user
    read -r new_input                # Read input from user
    printf '%s\n' "$new_input" >&3   # Write new content to the FD
} 3>>text.txt                        # do all this with FD 3 open to text.txt
...and, on the read end:
{
    flock -s 3    # wait for a read lock
    cat <&3       # read contents of the file from FD 3
} 3<text.txt      # all of this with text.txt open to FD 3
You'll notice some differences from what you were trying before:
The file descriptor used to grab the lock is in append mode (when writing to the end), or in read mode (when reading), so you aren't overwriting the file before you even grab the lock.
We let the shell run the read command directly (which, again, is a shell builtin and so can only be run by the shell), rather than telling flock to invoke it via the execve syscall (which is, again, impossible).
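Putting it together, a minimal sketch of the two scripts might look like this. The "only print when something new was written" part is handled by remembering the file's size between iterations; that bookkeeping is my own addition, not something flock provides, and stat -c %s assumes GNU stat.
Script 1 (the writer):
#!/bin/bash
while true; do
    read -r -p "Input to add to file: " new_input
    {
        flock -x 3                      # exclusive lock while appending
        printf '%s\n' "$new_input" >&3
    } 3>>text.txt
done
Script 2 (the reader):
#!/bin/bash
last_size=0
while true; do
    {
        flock -s 3                      # shared lock while reading
        size=$(stat -c %s text.txt)
        if [ "$size" -ne "$last_size" ]; then
            cat text.txt                # print only when the file has grown
            last_size=$size
        fi
    } 3<text.txt
    sleep 1
done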

Bash: file descriptors

I am a Bash beginner but I am trying to learn this tool to have a job in computers one of these days.
I am trying to teach myself about file descriptors now. Let me share some of my experiments:
#!/bin/bash
# Some dummy multi-line content
read -d '' colours <<- 'EOF'
red
green
blue
EOF
# File descriptor 3 produces colours
exec 3< <(echo "$colours")
# File descriptor 4 filters colours
exec 4> >(grep --color=never green)
# File descriptor 5 is an unlimited supply of violet
exec 5< <(yes violet)
echo Reading colours from file descriptor 3...
cat <&3
echo ... done.
echo Reading colours from file descriptor 3 again...
cat <&3
echo ... done.
echo Filtering colours through file descriptor 4...
echo "$colours" >&4
echo ... done. # Race condition?
echo Dipping into some violet...
head <&5
echo ... done.
echo Dipping into some more violet...
head <&5
echo ... done.
Some questions spring to mind as I see the output coming from the above:
fd3 seems to get "depleted" after "consumption", is it also automatically closed after first use?
how is fd3 different from a named pipe? (something I have looked at already)
when exactly does the command yes start executing? upon fd declaration? later?
does yes stop (CTRL-Z or other) and restart when more violet is needed?
how can I get the PID of yes?
can I get a list of "active" fds?
very interesting race condition on filtering through fd4, can it be avoided?
will yes only stop when I exec 5>&-?
does it matter whether I close with >&- or <&-?
I'll stop here, for now.
Thanks!
PS: partial (numbered) answers are fine.. I'll put together the different bits and pieces myself.. (although a comprehensive answer from a single person would be impressive!)
fd3 seems to get "depleted" after "consumption", is it also automatically closed after first use?
No, it is not closed. This is due to the way exec works. In the form you have used it (with no command, only redirections), exec arranges the shell's own file descriptors as requested by the I/O redirections and then leaves them that way until the script terminates or they are changed again later.
Later, cat receives a copy of this file descriptor 3 on its standard input (file descriptor 0). cat's standard input is implicitly closed when cat exits (or perhaps, though unlikely, cat closes it before it exits, but that doesn't matter). The original copy, which is the shell's file descriptor 3, remains open, although the stream behind it has reached EOF and nothing further will be read from it.
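You can see this with a regular file (a minimal sketch; any small file will do in place of /etc/hostname):
exec 3< /etc/hostname    # open FD 3 in the shell
cat <&3                  # prints the file's contents
cat <&3                  # prints nothing: FD 3 is at EOF, but still open
ls -l /proc/$$/fd/3      # the descriptor is still there (Linux)
exec 3<&-                # now it is closed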
how is fd3 different from a named pipe? (something I have looked at already)
The shell's <(some command) syntax (which is not standard Bourne shell syntax and is, I believe, only available in shells such as ksh, zsh and bash, by the way) might actually be implemented using named pipes. It probably isn't under Linux because there's a better way (using /dev/fd), but it probably is on other operating systems.
So in that sense, this syntax may or may not be a helper for setting up named pipes.
when exactly does the command yes start executing? upon fd declaration? later?
As soon as the <(yes violet) construct is evaluated (which happens when the exec 5< <(yes violet) is evaluated).
does yes stop (CTRL-Z or other) and restart when more violet is needed?
No, it does not stop. However, it will block soon enough, once it has produced more output than whatever is reading the other end of the pipe has consumed. In other words, the pipe buffer (64 KiB by default on Linux) will become full.
how can I get the PID of yes?
Good question! $! appears to contain it immediately after yes is executed. However, there seems to be an intermediate subshell, so you actually get the PID of that subshell. Try <(exec yes violet) to avoid the intermediate process.
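For example (a sketch; this relies on bash setting $! for a process substitution):
exec 5< <(exec yes violet)
yes_pid=$!
ps -p "$yes_pid"    # should show the yes process itself, not a subshell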
can I get a list of "active" fds?
Not from the shell. But if you're using an operating system like Linux that has /proc, you can just consult /proc/self/fd.
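For example:
ls -l /proc/$$/fd    # list the current shell's open file descriptors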
very interesting race condition on filtering through fd4, can it be avoided?
To avoid it, you presumably want to wait for the grep process to complete before proceeding through the script. If you obtain the process ID of that process (as above), I think you should be able to wait for it.
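A sketch of that approach (it relies on $! holding the process substitution's PID, and on wait accepting such a PID, which works in recent bash versions):
exec 4> >(grep --color=never green)
grep_pid=$!
echo "$colours" >&4
exec 4>&-           # close FD 4 so grep sees EOF and can finish
wait "$grep_pid"    # do not proceed until grep has printed its output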
will yes only stop when I exec 5>&-?
Yes. What will happen then is that yes will continue to try to produce output forever, but when the other end of the file descriptor is closed it will either get a write error (EPIPE) or a signal (SIGPIPE), which is fatal by default.
does it matter whether I close with >&- or <&-?
No. Both syntaxes are available for consistency's sake.

Is it possible to start a program from the command line with input from a file without terminating the program?

I have a program that users can run using the command line. Once running, it receives and processes commands from the keyboard. It's possible to start the program with input from disk like so: $ ./program < startScript.sh. However, the program quits as soon as the script finishes running. Is there a way to start the program with input from disk using < that does not quit the program and allows additional keyboard input to be read?
(cat foo.txt; cat) | ./program
I.e., create a subshell (that's what the parentheses do) which first outputs the contents of foo.txt and after that just copies whatever the user types. Then, take the combined output of this subshell and pipe it into stdin of your program.
Note that this also works for other combinations. Say you want to start a program that always asks the same questions. The best approach would be to use expect, which can verify that the questions really are what you anticipate before answering them, but for a quick workaround you can also do something like this:
(echo y; echo "$file"; echo n) | whatever
Use system("pause")(in bash it's just "pause") in your program so that it does not exit immediatly.
There are alternatives such as
dummy read
infinite loop
sigsuspend
many more
Why not try something like this:
BODY=$(./startScript.sh)
if [ -n "$BODY" ]
then printf '%s\n' "$BODY" | ./program
fi
That depends on how the program is coded. This cannot be achieved from within startScript.sh itself, if that is what you're trying to do.
What you could do is write a callingScript.sh that asks for the input first and then calls the program < startScript.sh.
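A minimal sketch of that idea (callingScript.sh and its prompt are hypothetical):
#!/bin/bash
# callingScript.sh: collect the user's answers up front, then feed
# both the canned script and those answers to the program
read -r -p "Your input: " user_input
{ cat startScript.sh; printf '%s\n' "$user_input"; } | ./program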

How to read data and read user response to each line of data both from stdin

Using bash, I want to read over a list of lines and ask the user whether the script should process each line as it is read. Since both the lines and the user's response come from stdin, how does one coordinate the file handles? After much searching and trial & error I came up with this example:
exec 4<&0
seq 1 10 | while read number
do
    read -u 4 -p "$number?" confirmation
    echo "$number $confirmation"
done
Here we use exec to duplicate the original stdin (the terminal) onto file descriptor 4, read the sequence of numbers from the piped stdin, and get the user's response from file descriptor 4. This seems like too much work. Is this the correct way of solving this problem? If not, what is the better way? Thanks.
You could just force read to take its input from the terminal, instead of the more abstract standard input:
while read number
do
    < /dev/tty read -p "$number?" confirmation
    echo "$number $confirmation"
done
The drawback is that you can't automate acceptance (by reading from a pipe connected to yes, for example).
Yes, using an additional file descriptor is a right way to solve this problem. Pipes can only connect one command's standard output (file descriptor 1) to another command's standard input (file descriptor 0). So when you're parsing the output of a command, if you need to obtain input from some other source, that other source has to be given by a file name or a file descriptor.
I would write this a little differently, making the redirection local to the loop, but it isn't a big deal:
seq 1 10 | while read number
do
    read -u 4 -p "$number?" confirmation
    echo "$number $confirmation"
done 4<&0
With a shell other than bash, in the absence of a -u option to read, you can use a redirection:
printf "%s? " "$number"; read confirmation <&4
You may be interested in other examples of using file descriptor reassignment.
Another method, as pointed out by chepner, is to read from a named file, namely /dev/tty, which is the terminal that the program is running in. This makes for a simpler script but has the drawback that you can't easily feed confirmation data to the script manually.
For your application, killmatching, two passes is totally the right way to go.
In the first pass you can read all the matching processes into an array. The number will be small (dozens typically, tens of thousands at most) so there are no efficiency issues. The code will look something like
candidates=()
while read -r thing; do
    candidates+=("$thing")
done < <(ps -ef | grep "$pattern")
(Here $pattern is a placeholder for whatever you're matching; feeding the loop from a process substitution rather than a pipe keeps the candidates array in the current shell.)
The second pass will loop through the candidates array and do the interaction.
Also, if it's available on your platform, you might want to look into pgrep. It's not ideal, but it may save you a few forks, which cost more than all the array lookups in the world.
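For instance (a sketch; $pattern is again a placeholder, and mapfile needs bash 4+):
mapfile -t candidates < <(pgrep -a -f "$pattern")     # first pass: collect "PID command" lines
for entry in "${candidates[@]}"; do                   # second pass: interact
    read -r -p "Kill '$entry'? " confirmation < /dev/tty
    [ "$confirmation" = y ] && kill "${entry%% *}"    # the PID is the first field
done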
