I want to copy a .bin file into a .img file using mcopy. For this, I can use mcopy -i image.img bin.bin ::. When I do, it tells me: Long file name "bin.bin" already exists.
a)utorename A)utorename-all r)ename R)ename-all o)verwrite O)verwrite-all
s)kip S)kip-all q)uit (aArRoOsSq):. Due to the size and importance of the files in this project (small size and no importance, just so you know), I just always want to put in O.
So I searched, and found this could be done with the command: echo "O" | mcopy -i image.img bin.bin ::. Great. However, mcopy has a slight delay, due to which the echo does NOT enter O at the right time (it arrives too soon). I tried { sleep 2; echo "O"; } | mcopy -i image.img bin.bin ::, but that does not help either.
So: How to actually echo text to a command after a delay, using bash?
(For the comments: adding -n to the mcopy command does not work either)
EDIT: There seemed to be some confusion about the purpose of the question, so I will try to clarify it. Point is, I have a problem and I want it solved. This could be done by using mcopy in an alternative way, as proposed in the comments already, OR by delaying the echo to the command (as is the question).
Even if my problem is solved in a way where the mcopy command is altered, that still would not answer the question. So please keep that in mind.
You're asking the wrong question, and you already know the answer to the question you're asking.
For the question "How to actually echo text to a command after a delay, using bash?", the answer is precisely:
{ sleep $DELAY; echo $TEXT; } | command
However, that should hardly ever be necessary. It provides the given text to command's standard input after the given delay, which may cause the command to wait a bit before proceeding with the read input. But there is (almost) never a case where the data needs to be delayed until the command is already waiting for it -- if the command is reading from standard input.
In the case of mtools, however, mcopy is not reading the clash code from standard input. Instead, it is reading it directly from /dev/tty, which is the terminal associated with the command. Redirecting standard input, which is what the bash pipe operator does, has no effect on /dev/tty. Consequently, the problem is not that you need to delay sending data to mcopy's standard input; the problem is that mcopy doesn't use standard input, and bash has no mechanism to hijack /dev/tty in order to fake user input.
So the other question might be "how to programmatically tell mcopy which clash option to use?", but apparently you know the answer to that one, too: use the -D command line option (which works with all relevant mtools utilities).
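For example, something like the following should make overwriting the default clash action (a sketch, not tested; per mtools(1) the -D argument reuses the clash letters from the interactive prompt, so check the manual for the exact case semantics on your mtools version):
mcopy -D o -i image.img bin.bin ::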
Finally, a more complicated question: "Is there some way to automate a utility which insists on reading input from /dev/tty?" Here, the answer is "yes" but the techniques are not so simple as just piping. The most common way is to use the expect utility, which allows you to spawn a subprocess whose /dev/tty is a pseudo-tty which expect can communicate with.
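A minimal sketch of that approach, assuming expect is installed (the pattern matched here is simply the prompt string from the question):
expect <<'EOF'
spawn mcopy -i image.img bin.bin ::
expect "(aArRoOsSq):"
send "O\r"
expect eof
EOF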
Related
I have a bash script that includes a line like this:
matches="`grep --no-filename $searchText $files`"
In other words, I am assigning the result of a grep to a variable.
I recently found that that line of code seems to have a vulnerability: if the grep finds too many results, it annoyingly just freezes execution.
First, if anyone can confirm that excessive output (and exactly what constitutes excessive) is a known danger with command substitution, please provide a solid link for me. I web searched, and the closest reference that I could find is in this link:
"Do not set a variable to the contents of a long text file unless you have a very good reason for doing so."
That hints that there is a danger, but is very inadequate.
Second, is there a known best practice for coping with this?
The behavior that I really want is for excessive output in command substitution
to generate a nice human readable error message followed by an error exit code so that my script will terminate instead of freeze. (Note: I always run my scripts with "set -e" as one of the initial lines). Is there any way that I can get this behavior?
Currently, the only solution that I know of is a hack that sorta works just for my immediate case: I can limit the output from grep using its --max-count option.
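For the record, that hack looks like this (the cap of 100 is an arbitrary choice; grep stops reading after that many matches):
matches="$(grep --no-filename --max-count=100 "$searchText" $files)"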
Ideally, you shouldn't capture data of unknown length into memory at all; if you read it as you need it, then grep will wait until the content is ready to use.
That is:
while IFS= read -r match; do
    echo "Found a match: $match"
    # example: maybe we want to look at whether a match exists on the filesystem
    [[ -e $match ]] && { echo "Got what we needed!" >&2; break; }
done < <(grep --no-filename "$searchText" "${files[@]}")
That way, grep only writes a line when read is ready to consume it (it will block, rather than keep running, once it has produced more output than fits in the relatively small pipe buffer) -- so the names you don't need never get generated in the first place, and there's no need to allocate memory for them or deal with them in any other way.
In a shell, I run the following commands without problem:
ls -al
!ls
The second invocation of ls also lists files with the -al flag. However, when I put the above commands into a bash script, a complaint is thrown:
!ls: command not found
How do I achieve the same effect in a script?
You would need to turn on both command history and !-style history expansion in your script (both are off by default in non-interactive shells):
set -o history
set -o histexpand
The expanded command is also echoed to standard error, just as in an interactive shell. You can prevent that by turning on the histverify shell option (shopt -s histverify), but in a non-interactive shell, that seems to make the history expansion a no-op.
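Applied to the commands from the question, that would look like:
#!/bin/bash
set -o history
set -o histexpand
ls -al
!ls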
Well, I wanted to have this working as well, and I have to tell everybody that the set -o history ; set -o histexpand method will not work in bash 4.x. It's not meant to be used there, anyway, since there are better ways to accomplish this.
First of all, a rather trivial example, just wanting to execute history in a script:
(bash 4.x or higher ONLY)
#!/bin/bash -i
history
Short answer: it works!!
The -i option stands for interactive, and history will work. But for what purpose?
Quoting Michael H.'s comment from the OP:
"Although you can enable this, this is bad programming practice. It will make your scripts (...) hard to understand. There is a reason it is disabled by default. Why do you want to do this?"
Yes, why? What is the deeper sense of this?
Well, THERE IS, which I'm going to demonstrate in the follow-up section.
My history buffer has grown HUGE, while some of those lines are script one-liners, which I really would not want to retype every time. But sometimes, I also want to alter these lines a little, because I probably want to give a third parameter, whereas I had only needed two in total before.
So here's an ideal way of using the bash 4.0+ feature to invoke history:
$ history
(...)
<lots of lines>
(...)
1234 while IFS='whatever' read [[ $whatever -lt max ]]; do ... ; done < <(workfile.fil)
<25 more lines>
So 1234 from history is exactly the line we want. Surely, we could take the mouse and move there, chucking the whole line in the primary buffer? But we're on *NIX, so why can't we make our life a bit easier?
This is why I wrote the little script below. Again, this is for bash 4.0+ ONLY (but might be adapted for bash 3.x and older with the aforementioned set -o ... stuff...)
#!/bin/bash -i
# grab history line $1, strip the leading line number, and hand the text to xsel (primary selection)
[[ $1 == "" ]] || history | grep "^\s*$1" |
    awk '{for (i=2; i<=NF; i++) printf $i" "}' | tr '\n' '\0' | xsel
If you save this as xselauto.sh for example, you may invoke
$ ./xselauto.sh 1234
and the contents of history line #1234 will be in your primary buffer, ready for re-use!
Now if anyone still says "this has no purpose AFAICS" or "who'd ever be needing this feature?" - OK, I won't care. But I would no longer want to live without this feature, as I'm just too lazy to retype complex lines every time. And I wouldn't want to touch the mouse for each marked line from history either, TBH. This is what xsel was written for.
BTW, the tr part of the pipe is a dirty hack which will prevent the command from being executed. For "dangerous" commands, it is extremely important to always leave the user a way to look before he/she hits the Enter key to execute it. You may omit it, but ... you have been warned.
P.S. This scriptlet is in fact a workaround, simulating !1234 typed at a bash prompt. As I could never make the ! work directly in a script (echo would never let me reveal the contents of history line 1234), I worked around the problem by simply grepping for the line I wanted to copy.
History expansion is part of the interactive command-line editing features of a shell, not part of the scripting language. It's not generally available in the context of a script, only when interacting with a (pseudo-)human operator. (Pseudo meaning that it can be made to work with things like expect or other keystroke-repeating automation tools that generally try to play-act a human, not implying that any particular operator might be sub-human or anything.)
Using bash I want to read over a list of lines and ask the user if the script should process each line as it is read. Since both the lines and the user's response come from stdin, how does one coordinate the file handles? After much searching and trial & error I came up with the following example:
exec 4<&0
seq 1 10 | while read number
do
    read -u 4 -p "$number?" confirmation
    echo "$number $confirmation"
done
Here we are using exec to reopen stdin on file handle 4, reading the sequence of numbers from the piped stdin, and getting the user's response on file handle 4. This seems like too much work. Is this the correct way of solving this problem? If not, what is the better way? Thanks.
You could just force read to take its input from the terminal, instead of the more abstract standard input:
while read number
do
    < /dev/tty read -p "$number?" confirmation
    echo "$number $confirmation"
done
The drawback is that you can't automate acceptance (by reading from a pipe connected to yes, for example).
Yes, using an additional file descriptor is a right way to solve this problem. Pipes can only connect one command's standard output (file descriptor 1) to another command's standard input (file descriptor 0). So when you're parsing the output of a command, if you need to obtain input from some other source, that other source has to be given by a file name or a file descriptor.
I would write this a little differently, making the redirection local to the loop, but it isn't a big deal:
seq 1 10 | while read number
do
    read -u 4 -p "$number?" confirmation
    echo "$number $confirmation"
done 4<&0
With a shell other than bash, in the absence of a -u option to read, you can use a redirection:
printf "%s? " "$number"; read confirmation <&4
You may be interested in other examples of using file descriptor reassignment.
Another method, as pointed out by chepner, is to read from a named file, namely /dev/tty, which is the terminal that the program is running in. This makes for a simpler script but has the drawback that you can't easily feed confirmation data to the script manually.
For your application, killmatching, two passes is totally the right way to go.
In the first pass you can read all the matching processes into an array. The number will be small (dozens typically, tens of thousands at most) so there are no efficiency issues. The code will look something like
candidates=()
# read from process substitution (not a pipe) so the array survives the loop; "$pattern" is a placeholder for your match
while read -r thing; do candidates+=("$thing"); done < <(ps -e | grep "$pattern")
(Syntactic details may be wrong; my bash is rusty.)
The second pass will loop through the candidates array and do the interaction.
Also, if it's available on your platform, you might want to look into pgrep. It's not ideal, but it may save you a few forks, which cost more than all the array lookups in the world.
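A sketch of that variant, assuming bash 4+ for mapfile and reusing the hypothetical "$pattern" placeholder:
# one fork in total: pgrep emits matching PIDs, mapfile collects them into an array
mapfile -t candidates < <(pgrep -f "$pattern")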
I have a simple question. I know that shell scripts are slow/inefficient when it comes to recursion and looping.
Generally, is it possible to read input continuously, instead of having to loop over the read/"grab" part of the code, for cases where the input is continual and plentiful (a kind of EVENT-DRIVEN scenario)?
For example,
I use Fedora 16 (GNOME 3.2), and for reasons unknown the Caps Lock notification is missing. I own a netbook and don't have the "luxury" of indicator LEDs. So I've decided to write a shell script to notify me when the Caps Lock key is pressed. I figured out a way to query the key state:
xset -q | grep Caps | awk '{print $4}'
That gives me "on"/"off" as the output. I could have the loop execute every second (or less), but that would be a very crude way of doing it.
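Spelled out, that crude polling version would be something like this (notify-send is just one possible way to surface the state):
# poll once a second and report changes in the Caps Lock state
prev=""
while sleep 1; do
    state=$(xset -q | grep Caps | awk '{print $4}')
    [[ $state != "$prev" ]] && notify-send "Caps Lock: $state"
    prev=$state
done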
What you wrote is event-driven. xset -q produces some output, which only at that point (i.e. when it's produced) is consumed by grep. At that point, grep might produce some output (only if it matches Caps) and only in that case will awk process something.
The problem here is not bash - the "problem" is xset -q. It was not designed to continuously give you output. It was designed as a one-shot output command.
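To see the contrast, pipe a command that genuinely produces output continuously into a loop; each line is then processed the moment it arrives (the log path here is just an example):
# tail -f never exits, yet every new line is handled immediately
tail -f /var/log/messages | while read -r line; do
    echo "new line: $line"
done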
To touch the other part of the question - if you actually just need an indicator, look here:
https://askubuntu.com/questions/30334/what-application-indicators-are-available/37998#37998
An excellent source of all sorts of indicators. One of them is Keylock indicator (search the above page for more info).
The above link is from askubuntu.com, i.e. it's Ubuntu-centric, but the above seems to be available for Fedora, too:
http://forums.fedoraforum.org/showthread.php?t=257835
From the above thread (this post by fewt):
su -
yum install lock-keys-applet
exit
killall -HUP gnome-panel
Hope this helps.
This is probably a newbie's escaping problem. I'm trying to run a command in a for loop like this:
$ for SET in `ls ../../mybook/WS/wsc_production/`; do ~/sandbox/scripts/ftype-switch/typesort.pl /media/mybook/WS/wsc_production/$SET ./wsc_sorter/$SET | tee -a sorter.log; done;
but I end up with sorter.log being empty. (I'm sure there is some output.) If I escape the pipe symbol (\|), I end up with no sorter.log at all.
What am I doing wrong?
$ bash --version
GNU bash, version 4.1.5(1)-release (i486-pc-linux-gnu)
Edit: Oops, /media/mybook/ fell asleep, so there actually was no output. The code was correct in the first place. Thanks to all for comments, though.
Glenn said it well. I would like to offer a different angle: you can move the tee command outside of the for loop. The advantage of this approach is that tee is invoked only once:
dir1=$HOME/sandbox/scripts/ftype-switch
dir2=/media/mybook/WS/wsc_production
for path in "$dir2"/*; do
    SET=${path##*/}   # just the entry name, without the leading directory
    "$dir1"/typesort.pl "$dir2/$SET" "./wsc_sorter/$SET" 2>&1
done | tee -a sorter.log
You're using tee, so if there is output, you'd see it on your terminal. What do you see?
If you see output, it's probably stderr you're seeing, so you might want to redirect it:
dir1=$HOME/sandbox/scripts/ftype-switch
dir2=/media/mybook/WS/wsc_production
for path in "$dir2"/*; do
    SET=${path##*/}   # just the entry name, without the leading directory
    "$dir1"/typesort.pl "$dir2/$SET" "./wsc_sorter/$SET" 2>&1 | tee -a sorter.log
done
My deepest apologies, the problem was somewhere else and my script actually did not output anything at all. Now it works.
Two reasons why I got the illusion that the problem was in escaping:
first, of course, a lack of confidence in bash scripting, which is an effect of a lack of knowledge and experience;
and also, a lack of attention: it did not occur to me that the USB disk had fallen asleep, so when I tried the loop there actually was no output.
Well, that was some stumbling on my way to knowledge... :)