I have a simple question. I know that shell scripts are slow/inefficient when it comes to recursion and looping.
Generally, is it possible to read input continuously, instead of having to loop the read/"grab" part of the code, for cases where the input is continuous and plentiful (a kind of event-driven scenario)?
For example:
I use Fedora 16 (GNOME 3.2) and, for reasons unknown, the Caps Lock notification is missing. I own a netbook and don't have the "luxury" of indicator LEDs. So I've decided to write a shell script to notify me when the Caps Lock key is pressed. I figured out a way to get the key state:
xset -q | grep Caps | awk '{print $4}'
That would give me "on"/"off" as the output. I could have a loop execute every second (or less), but that would be a very crude way of doing it.
What you wrote is event-driven. xset -q produces some output, which only at that point (i.e. when it's produced) is consumed by grep. At that point, grep might produce some output (only if it matches Caps) and only in that case will awk process something.
The problem here is not bash - the "problem" is xset -q. It was not designed to continuously give you output. It was designed as a one-shot output command.
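Absent a real event source, the once-a-second polling loop the question alludes to is about the best a plain shell script can do. A rough sketch (notify-send is my assumption for surfacing the message; any notifier would do):
#!/bin/bash
# crude polling: check the Caps Lock state once per second and notify on change
prev=""
while true; do
    state=$(xset -q | grep Caps | awk '{print $4}')
    if [ "$state" != "$prev" ]; then
        notify-send "Caps Lock is $state"
        prev=$state
    fi
    sleep 1
done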
To touch the other part of the question - if you actually just need an indicator, look here:
https://askubuntu.com/questions/30334/what-application-indicators-are-available/37998#37998
An excellent source of all sorts of indicators. One of them is Keylock indicator (search the above page for more info).
The above link is from askubuntu.com, i.e. it's Ubuntu-centric, but the indicator seems to be available for Fedora, too:
http://forums.fedoraforum.org/showthread.php?t=257835
From the above thread (this post by fewt):
su -
yum install lock-keys-applet
exit
killall -HUP gnome-panel
Hope this helps.
Related
One may want to use Bash on Windows in Task Scheduler or maybe as version-control hook scripts. Is it possible or supported?
If not, why? Is it a bug or a measure to prevent some issues?
Use @3d1t0r's solution, but also pipe to cat
wsl bash -c "man bash | cat" # noninteractive; streams the entire manpage to the terminal
wsl bash -c "man bash" # shows me the first page, and lets me scroll around; need to hit `q` to exit
If interactive mode is fine, bash -c is often superfluous
wsl man bash # same behavior as `wsl bash -c "man bash"`
Context (what in the world is "interactive" vs "non-interactive" invocation?):
The example above might not make it entirely clear, but man is changing its behavior based on what it's connected to.
In "interactive mode", man lets me scroll around, so that I can read the page at a comfortable reading pace.
In noninteractive mode, man dumps the entire manpage to the console, giving me no time to read anything.
"But wait," I hear you ask, "isn't man catting the man page because you asked it to? I see it right there--man bash | cat"
No, man has no idea what cat is. It just gets hints about whether STDOUT is connected to an interactive terminal.
Here's a different example, one that consistently cats:
wsl bash -c "echo hey | grep --color e" # colors 'e' red
wsl bash -c "echo hey | grep --color e | cat" # colors disappear, what gives?
Now both examples are streaming their output, but the second one is defiantly ignoring my --color flag.
The common thread here is that man and grep both behave appropriately depending on whether they think their output is going to be read by a human or piped away somewhere.
Other common commands that auto-detect interactivity include ls and git. Usually the behavior change will involve output paging or colors (other variations exist).
paging is nice for humans, because humans generally can't read at the speed of streamed output.
paging is bad for robots, because paging is a lot of protocol overhead when you can just consume buffered streams. I mean seriously, why are humans so slow and chatty?
colors are nice for humans, because we like additional visual cues to aid visual distinction.
colors are bad for streaming to a file, because your file will be full of ANSI color-code garbage that most text editors don't display nicely.
Automatic behavior switching based on whether STDOUT is connected to an interactive terminal makes all these use cases usually "just work".
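A script can make the same decision itself: bash's -t test reports whether a given file descriptor is attached to a terminal. A minimal sketch (the messages are just illustrative):
if [ -t 1 ]; then
    echo "stdout is a terminal: colors/paging make sense"   # human reader
else
    echo "stdout is a pipe or file: emit plain output"      # robot consumer
fi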
Restating the Original Question
In my use case and @bahrep's use case, interactive mode can be especially bad for unsupervised scripts (e.g. as launched by Task Scheduler). I am guessing @bahrep's scheduled runs hung on less getting invoked and waiting for human input.
For some reason, wsl-driven scripts launched from the task-scheduler give underlying scripts the wrong hints--they hint that the final output is attached to an interactive terminal.
Ideally, wsl would know from the windows side of the execution environment whether it is getting invoked interactively or not, and pass along the proper hint. Then I could just run wsl [command]. Until that happens, I'll need to use wsl bash -c "[command] | cat" as a workaround.
If I'm understanding your question correctly, the -c option is what you're looking for. It allows you to directly invoke a Linux command.
For example, to open the man page for bash (perhaps in order to find out about the -c option):
bash -c "man bash"
Note: You can leave off the quotes if you escape any spaces (e.g. bash -c man\ bash), but it's often easier to just use the quotes, as the first unescaped space will lose the rest of your command.
e.g. bash -c man bash will be interpreted the same as bash -c man.
Usually, if I want to print the output of a command and also capture that output in a file, tee is the solution. But I'm making a script using a utility which seems to have a special behaviour: the WPS wireless assessment tool bully.
If I run a bully command as normal (without tee), the output is shown in the standard way, step by step. But if I put a pipe at the end to log it, like | tee "/path/to/my/logfile", the output on the screen freezes. It shows nothing until the command ends, and after it ends, it shows everything in one shot (not step by step); of course it also puts the output in the tee log file.
An example of bully command: bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt
Why? Not sure if it only happens with bully or if there are other programs with the same behaviour.
Is there another way to capture the output into a file having the output on screen in real time?
What you're seeing is full buffering vs line buffering. By default, when stdout is writing to a tty (i.e. interactive) you get line buffering; otherwise output is fully buffered. See the setvbuf(3) man page for a more detailed explanation.
Some commands offer an option to force line buffering (e.g. GNU grep has --line-buffered). But that sort of option is not widely available.
Another option is to use something like expect's unbuffer command if you want to be able to see the output more interactively (at the cost of depending on expect, of course).
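As a sketch (I haven't tested either against bully specifically): unbuffer runs the command under a pseudo-tty so it keeps line buffering, and GNU coreutils' stdbuf can help when the program relies on stdio's default buffering:
# requires the expect package
unbuffer bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt
# only helps if the program doesn't set its own buffering
stdbuf -oL bully wlan0mon -b 00:11:22:33:44:55 -c 8 -L -F -B -v 3 -p 12345670 | tee /root/Desktop/log.txt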
Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside my_script, to detect the command that was run first as the pipe input (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variable set in the process space of the second command that records the full command line. The command data my_script sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to provide any data, since the first process seems to exit before the second starts.
The best information I've been able to find suggests that in bash, in some cases, you can get the exit codes of the processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research seems to say it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe-replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present when the command is run.
(Update: adding later information following on from the comments below.)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the value pipe:[PIPEID] that it symlinks to. That can then be used to search through the /proc/<pid>/fd/ data of the other running processes to find another process with the same PIPEID open, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it from being read). So I can't rely on this method returning any information.
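For what it's worth, a rough sketch of that /proc lookup from inside my_script might look like this (Linux-specific, and it suffers from exactly the race described above):
#!/bin/bash
# find the process on the other end of our stdin pipe, if it's still alive
pipe_id=$(readlink /proc/$$/fd/0)                  # e.g. "pipe:[123456]"
[[ $pipe_id == pipe:* ]] || exit 0                 # stdin isn't a pipe
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "$pipe_id" ]; then
        pid=${fd#/proc/}; pid=${pid%%/*}
        [ "$pid" != "$$" ] && echo "pipe peer: pid $pid ($(tr '\0' ' ' < /proc/$pid/cmdline 2>/dev/null))"
    fi
done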
I want to copy a .bin file into a .img file using mcopy. For this, I can use mcopy -i image.img bin.bin ::. When using this, it tells me: Long file name "bin.bin" already exists.
a)utorename A)utorename-all r)ename R)ename-all o)verwrite O)verwrite-all
s)kip S)kip-all q)uit (aArRoOsSq):
Due to the size and importance of stable files in this project, I just always want to put in O (small size and no importance, just so you know).
So I searched, and found this could be done by using the command echo "O" | mcopy -i image.img bin.bin ::. Great. However, mcopy has a slight delay, because of which the echo does NOT enter O at the right time (it is too soon). I tried to use { sleep 2; echo "O"; } | mcopy -i image.img bin.bin ::, which doesn't help either.
So: How to actually echo text to a command after a delay, using bash?
(For the comments: adding -n to the mcopy command does not work either.)
EDIT: There seemed to be some confusion about the purpose of the question, so I will try to clarify it. Point is, I have a problem and I want it solved. This could be done by using mcopy in an alternative way, as proposed in the comments already, OR by delaying the echo to the command (as is the question).
Even if my problem is solved in a way where the mcopy command is altered, that still would not answer the question. So please keep that in mind.
You're asking the wrong question, and you already know the answer to the question you're asking.
For the question "How to actually echo text to a command after a delay, using bash?", the answer is precisely:
{ sleep $DELAY; echo $TEXT; } | command
However, that should hardly ever be necessary. It provides the given text to command's standard input after the given delay, which may cause the command to wait a bit before proceeding with the read input. But there is (almost) never a case where the data needs to be delayed until the command is already waiting for it -- if the command is reading from standard input.
In the case of mtools, however, mcopy is not reading the clash code from standard input. Instead, it is reading it directly from /dev/tty, which is the terminal associated with the command. Redirecting standard input, which is what the bash pipe operator does, has no effect on /dev/tty. Consequently, the problem is not that you need to delay sending data to mcopy's standard input; the problem is that mcopy doesn't use standard input, and bash has no mechanism to hijack /dev/tty in order to fake user input.
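To see the distinction concretely, here is a small illustration (run from an interactive terminal; the strings are just illustrative): the first read gets the piped data, while the second still waits for the keyboard, because it reads from /dev/tty.
echo piped | bash -c 'read -r a; echo "from stdin: $a"; read -r b </dev/tty; echo "from tty: $b"'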
So the other question might be "how to programmatically tell mcopy which clash option to use?", but apparently you know the answer to that one, too: use the -D command line option (which works with all relevant mtools utilities).
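If I remember the mtools syntax correctly, the clash code letters mirror the interactive prompt (lowercase for the current file, uppercase for all); treat the exact letter as an assumption and check mtools(1). Something like:
mcopy -D O -i image.img bin.bin ::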
Finally, a more complicated question: "Is there some way to automate a utility which insists on reading input from /dev/tty?" Here, the answer is "yes" but the techniques are not so simple as just piping. The most common way is to use the expect utility, which allows you to spawn a subprocess whose /dev/tty is a pseudo-tty which expect can communicate with.
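For this particular case, a minimal expect sketch might look like the following (the prompt pattern is copied from the question; adjust as needed):
#!/usr/bin/expect -f
# run mcopy on a pseudo-tty and answer the name-clash prompt with "O"
spawn mcopy -i image.img bin.bin ::
expect "(aArRoOsSq):" { send "O\r" }
expect eof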
In a shell, I run the following commands without problem:
ls -al
!ls
The second invocation of ls also lists files with the -al flag. However, when I put the above commands into a bash script, an error is thrown:
!ls: command not found
How do I achieve the same effect in a script?
You would need to turn on both command history and !-style history expansion in your script (both are off by default in non-interactive shells):
set -o history
set -o histexpand
The expanded command is also echoed to standard error, just like in an interactive shell. You can prevent that by turning on the histverify shell option (shopt -s histverify), but in a non-interactive shell, that seems to make the history expansion a no-op.
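Putting the two options together with the question's commands, a minimal script would be (a sketch; per the behaviour described above, the expanded command is echoed to stderr):
#!/bin/bash
set -o history      # record commands in the history list
set -o histexpand   # enable !-style history expansion
ls -al
!ls                 # expands to the previous ls -al invocation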
Well, I wanted to have this working as well, and I have to tell everybody that the set -o history ; set -o histexpand method will not work in bash 4.x. It's not meant to be used there, anyway, since there are better ways to accomplish this.
First of all, a rather trivial example, just wanting to execute history in a script:
(bash 4.x or higher ONLY)
#!/bin/bash -i
history
Short answer: it works!!
The spanking new -i option stands for interactive, and history will work. But for what purpose?
Quoting Michael H.'s comment from the OP:
"Although you can enable this, this is bad programming practice. It will make your scripts (...) hard to understand. There is a reason it is disabled by default. Why do you want to do this?"
Yes, why? What is the deeper sense of this?
Well, THERE IS, which I'm going to demonstrate in the follow-up section.
My history buffer has grown HUGE, while some of those lines are script one-liners, which I really would not want to retype every time. But sometimes, I also want to alter these lines a little, because I probably want to give a third parameter, whereas I had only needed two in total before.
So here's an ideal way of using the bash 4.0+ feature to invoke history:
$ history
(...)
<lots of lines>
(...)
1234 while IFS='whatever' read [[ $whatever -lt max ]]; do ... ; done < <(workfile.fil)
<25 more lines>
So 1234 from history is exactly the line we want. Surely, we could take the mouse and move there, chucking the whole line in the primary buffer? But we're on *NIX, so why can't we make our life a bit easier?
This is why I wrote the little script below. Again, this is for bash 4.0+ ONLY (but might be adapted for bash 3.x and older with the aforementioned set -o ... stuff...)
#!/bin/bash -i
# grab history line number $1, strip the leading line number,
# and place the command in the X primary selection via xsel
[[ $1 == "" ]] || history | grep "^\s*$1\s" |
awk '{for (i=2; i<=NF; i++) printf $i" "}' | tr '\n' '\0' | xsel
If you save this as xselauto.sh for example, you may invoke
$ ./xselauto.sh 1234
and the contents of history line #1234 will be in your primary buffer, ready for re-use!
Now if anyone still says "this has no purpose AFAICS" or "who'd ever be needing this feature?" - OK, I won't care. But I would no longer want to live without this feature, as I'm just too lazy to retype complex lines every time. And I wouldn't want to touch the mouse for each marked line from history either, TBH. This is what xsel was written for.
BTW, the tr part of the pipe is a dirty hack which will prevent the command from being executed. For "dangerous" commands, it is extremely important to always leave the user a way to look before he/she hits the Enter key to execute it. You may omit it, but ... you have been warned.
P.S. This scriptlet is in fact a workaround, simulating !1234 typed on a bash shell. As I could never make the ! work directly in a script (echo would never let me reveal the contents of history line 1234), I worked around the problem by simply grepping for the line I wanted to copy.
History expansion is part of the interactive command-line editing features of a shell, not part of the scripting language. It's not generally available in the context of a script, only when interacting with a (pseudo-)human operator. (pseudo meaning that it can be made to work with things like expect or other keystroke repeating automation tools that generally try to play act a human, not implying that any particular operator might be sub-human or anything).