Is there a way to update READLINE_LINE while a command is running, and update it again when the command is complete?
Looking at the Bash builtins docs:
-x keyseq:shell-command
Cause shell-command to be executed whenever keyseq is entered. When shell-command is executed, the shell sets the READLINE_LINE variable to the contents of the Readline line buffer and the READLINE_POINT and READLINE_MARK variables to the current location of the insertion point and the saved insertion point (the mark), respectively. If the executed command changes the value of any of READLINE_LINE, READLINE_POINT, or READLINE_MARK, those new values will be reflected in the editing state.
It's not very explicit; perhaps it means that the line is updated only once, after the shell-command is complete?
For instance, if I add this to .bashrc
bind -x '"\C-g":foobar'
foobar()
{
    READLINE_LINE="# working on it..."   # intended to appear immediately
    sleep 10                             # stand-in for a long-running command
    READLINE_LINE="# foo bar done"       # intended to replace it when the command finishes
}
when I press Ctrl-g, whatever I had typed disappears, leaving the line empty for 10 seconds, until it shows # foo bar done. The terminal never shows # working on it.... (Bash 5.1, tested on Ubuntu and macOS).
Any suggestions for changing the line multiple times, e.g. to show some progress in case the command takes long?
I left a comment, but I guess you were looking for a proper answer. In this case, the reported behaviour is completely normal, as mentioned here.
Quoting the previous answer: Environment variables are heritable, not shareable.
When you run a command on the command line, it runs in its own process, and those processes are children of the shell process in which they run. That means a child process can read all of the parent's variables, but can't set new values back.
There are some exceptions to this behaviour, for example printf, echo, kill, among others, which are built-in commands of the given shell and therefore run in the shell's own process.
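A minimal demonstration of that point (the variable name is just for illustration):
export GREETING="hello"
# the child process can read the exported variable...
bash -c 'echo "child sees: $GREETING"; GREETING="changed"'
# ...but any change it makes is lost when it exits
echo "parent still sees: $GREETING"   # still prints "hello"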
Related
I am writing a bash completion program in Golang. In fact, the program is its own completion program: it looks for the COMP_LINE environment variable, and if it is present, it outputs the completion options; if not, it just proceeds to run the main program.
The completion is then installed with the following:
complete -C /path/to/my-program my-program
This works well. For most of my completions I want a space to be added after the word has been completed; however, for a few flags I do not want this to happen.
When the completion is defined, you can pass the -o nospace option to omit the trailing space when completing a word. However, then all completions that do need a space have to have one added explicitly to the completion word list.
Is there any way that my program can modify the complete opts dynamically based on what completion it is returning? Is this exposed as an environment variable that a completion command could set?
I would like to avoid having to append a space to every other completion just to suppress it in the edge case of the one flag where I don't want it.
My Perl framework (Perinci::CmdLine) also does the same: the scripts are their own completion, activated using complete -C SCRIPTNAME SCRIPTNAME (when the script is in PATH). Completing using an external command has its pros and cons compared to using a shell function. To solve the problem you encountered, I output a dummy answer with an extra space. Since there is more than one answer, bash no longer automatically adds a space. So instead of just returning (in JSON notation):
["-o"]
you return:
["-o","-o "]
I also use this trick when doing path completion. To let the user complete a path by "drilling down", when there is a single directory match I output:
["dirname/","dirname/ "]
so the user can press Tab again to drill down inside the path, instead of getting a space after "dirname/" and having to backspace and press Tab again.
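The same trick works for the Go program in the question, since complete -C does not care what language the completer is written in: bash runs it with the command name, the word being completed, and the previous word as arguments, and reads one candidate per line from its output. A hedged bash sketch of a stand-in completer (the flag names are invented for illustration):
#!/bin/bash
# hypothetical completer, installed with:  complete -C /path/to/this-script my-program
word=$2   # the word currently being completed
if [ "$word" = "-o" ]; then
    # emit the candidate twice, the second copy with a trailing space;
    # bash then sees more than one candidate and adds no automatic space
    printf '%s\n' '-o' '-o '
else
    # ordinary candidates: a single remaining match still gets the automatic space
    compgen -W '--verbose --help -o' -- "$word"
fi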
What I want to do is simple: add a keybinding for one of my programs using the readline startup file inputrc. In addition, as my program does not produce any output, I do not want the command name to appear on stdout.
What my problem is:
.inputrc content:
"\e[1;5A":'pipe_send\n'
When I hit Ctrl+Up arrow, "pipe_send" appears on the command line:
[ alexkag#$$$$$:: / ]
$ pipe_send
What I'd like is for pipe_send not to appear on the command line, just like the commands provided by readline, such as history-search-backward, history-search-forward, etc.
Do you know of any way to do that? Maybe I shouldn't use readline? Note: my keybinding must only be visible in bash, not to the whole system.
As mentioned in the comments by gniourf_gniourf the solution is:
bind -x '"\e[1;5A":pipe_send'
bind -x will tell bash to execute a command whenever a certain key is pressed:
-x keyseq:shell-command
Cause shell-command to be executed whenever keyseq is entered. When shell-command is executed, the shell sets the READLINE_LINE variable to the contents of the Readline line buffer and the READLINE_POINT variable to the current location of the insertion point. If the executed command changes the value of READLINE_LINE or READLINE_POINT, those new values will be reflected in the editing state.
\e[1;5A is the escape sequence the terminal sends for Ctrl+Up.
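A hedged sketch of how this can live in ~/.bashrc (the body of pipe_send here is hypothetical; substitute the real program). Because the binding is made in ~/.bashrc rather than ~/.inputrc, it exists only in bash, not system-wide:
# hypothetical stand-in for the real pipe_send program; it prints nothing,
# so nothing appears on the command line when Ctrl+Up is pressed
pipe_send() {
    printf 'next\n' >> /tmp/pipe_send.log
}
bind -x '"\e[1;5A":pipe_send'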
I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read either line by line or command by command (commands separated by ;), with the exception of blocks such as if ... fi, which are read and interpreted as a single chunk:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
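For example, at an interactive prompt bash keeps showing the secondary prompt (PS2, usually >) and runs nothing until the closing fi is entered:
$ if true; then
>     echo "the body only runs once fi is typed"
> fi
the body only runs once fi is typed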
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
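A hedged way to watch the command-by-command reading in action (file name assumed; whether appended text is picked up can depend on the shell and its input buffering, but recent bash versions typically behave as shown):
cat > slow.sh <<'EOF'
echo "first command"
sleep 30
echo "second command"
EOF
bash slow.sh &
# while slow.sh is still sleeping, append another command to the file
echo 'echo "appended while the script was running"' >> slow.sh
wait
# typical output:
#   first command
#   second command
#   appended while the script was running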
It's funny that most OSes I know do NOT read the entire content of a script into memory and instead run it from disk. Reading it in whole first would allow making changes to the script while it is running. I don't understand why it's done this way, given that:
scripts are usually very small (and don't take much memory anyway)
at some point, as shown in this thread, people will make changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you decide that a script is not running OK (because you are writing/changing/debugging it), do you care about the rest of that run? You can go ahead and make the changes, save them, and ignore all output and actions done by the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, since the current/previous run is behaving abnormally: it will typically skip some steps, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports this, it's best to let the current run finish and THEN save the updated script. You can make the changes already, just don't save them.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS screen), so you can't say you need to wait for the run to complete before you can open the file again.
No, they are not, and there are good reasons for that.
One of the things you should keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its input source line by line until EOF is returned by the kernel.
Most shells have no extra support for interpreting files; they work with a file just as they would work with a terminal.
In fact, this is considered a nice feature, because you can do interesting things with it: see How do Linux binary installers (.bin, .sh) work?
You can take a binary file and prepend a shell script to it. You can't do this with an interpreter, because it parses the whole file, or at least it would try to and fail. A shell just interprets the file line by line and doesn't care about the garbage at the end of the file. You just have to make sure the execution of the script terminates before it reaches the binary part.
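A minimal sketch of that pattern (marker name and payload layout assumed): the shell executes the commands at the top and exits before it ever reaches the binary data appended after the marker.
#!/bin/sh
# installer.sh: shell commands first, then a gzipped tar payload appended after __ARCHIVE__
ARCHIVE_LINE=$(awk '/^__ARCHIVE__$/ { print NR + 1; exit }' "$0")
tail -n +"$ARCHIVE_LINE" "$0" | tar xzf - -C "${TMPDIR:-/tmp}"
exit 0
__ARCHIVE__
(binary tar.gz data is appended here when the installer is built)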
Hi. I'm new to the shell and am working on my first kludged-together script. I've read all over the intertube and SO, and there are many, MANY places where disown, nohup, & and return are explained, but something isn't working for me.
I want a simpler timer. The script asks for user input for the hours, mins., etc., then:
echo "No problem, see you then…"
sleep $[a*3600+b*60+c]
At this point (either at the first or the second line, I'm not sure) I want the script OR the specific command in the script to become a background process. Maybe a daemon? So that the timer will still go off on schedule even if:
that terminal window is shut
the terminal app is quit completely
the computer is put to sleep (I realize I probably need some different code still to wake the Mac itself)
Also after the "No problem" line I want a return command so that the existing shell window is still useful in the meantime.
The terminal-notifier command (the timer wakeup) gets called immediately with certain uses of the above (I can't remember which right now), and then a second notification fires at the right time. Using the return command anywhere basically seems to quit the script.
One thing I'm not clear on is whether/how disown, nohup, etc. apply to a command process vs. a script process, i.e., will any of them work properly on just one command inside a script (and if not, how to start a script as a background process that still asks for input).
Maybe I should use some alternative to sleep?
It isn't necessary to use a separate script or have the script run itself in order to get part of it to run in the background.
A much simpler way is to place the portion that you want backgrounded (the sleep and the command that follows it) inside parentheses, and put an ampersand after them.
So the end of the script would look like:
(
    sleep $time
    # Do whatever
)&
This will cause that portion of the code to run inside a subshell that is placed in the background. Since there is no code after it, the first shell will immediately exit, returning control to your interactive shell.
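Put together, a hedged sketch of the whole timer might look like this (the variable names follow the question; the terminal-notifier call is just an example of what could run after the sleep):
#!/bin/bash
read -p "Hours: " a
read -p "Minutes: " b
read -p "Seconds: " c
echo "No problem, see you then…"
(
    sleep $((a*3600 + b*60 + c))
    terminal-notifier -message "Timer finished"   # or any other command
) &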
When your script is run, it is actually run by starting a new shell to execute it. In order for you to get your script into the background, you would need to send that shell into the background, which you can't do because you would need to communicate with its parent shell.
What you can do is have your script call itself with a special argument to indicate that it should do the work:
#! /bin/zsh
if [ "$1" != '--run' ] ; then
    echo sending to background
    $0 --run "$@" &
    exit
fi
sleep 1
echo backgrounded "$@"
This script first checks whether its first argument is --run. If it is not, it calls itself ($0) with that argument and all other arguments it received ("$@") in the background, and exits. You can use a similar method, performing the test when you want to enter the background, and possibly passing just the data you will need instead of every argument. For example, to pass just the number of seconds:
$0 --run $[a*3600+b*60+c] &
Is it possible to run a shell script and use its result as a user defined macro in Xcode?
Basically I just want the result of a shell script to be put in a variable so it gets set in Info.plist (just like ${EXECUTABLE_NAME} etc.)
For example:
If I add $(/usr/bin/whoami) as a build setting condition (at the bottom of settings of the build configuration) it just sets an empty string.
See this question for a couple of different approaches. All of them require adding a "Run Script" build phase.
Assuming a bash-like shell, and given an almost complete lack of context for your problem, try:
EXECUTABLE_NAME=$( scriptToGetEXEC_NAME )
PRODUCT_NAME=$( scriptToGetPROD_NAME )
The $( ... cmd ... ) construct is called command substitution. What this means is that when the shell processor scans each line of code, it first looks to see if there are any $(...) constructs embedded (among other things). If there are, it spawns a new shell, executes the code inside, and if any text is returned, that text is embedded in the command line; THEN the shell scans the line again and eventually executes everything from left to right, assuming that the first word will turn into a built-in command or a command in the PATH.
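A minimal illustration (the variable name is invented):
# the output of whoami replaces the $(...) before the rest of the line is executed
CURRENT_USER=$(/usr/bin/whoami)
echo "Building as ${CURRENT_USER}"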
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.