Properly overwrite current line in terminal

In most terminals, if you haven't printed a newline character (or line feed; \n), printing a carriage return (\r) will reset your cursor to the beginning of the line so that subsequent characters overwrite what you've already output on the current line.
However, if you don't output enough characters to fully overwrite the previous contents of the line, the remaining characters will stay there. So, for example, the following pseudocode:
print "goodbye"
print "\rhello"
would result in helloye.
I'm wondering: is there any way to actually clear these remaining characters? I could simply keep track of what I've printed and then overwrite it with spaces, but that would (a) require the bookkeeping and (b) still leave trailing space characters, which isn't ideal. I'm looking for a general solution I can reuse whenever I come across this problem in the future. Any advice would be great; thanks!

Try using terminal escape sequences:
To clear from the beginning of the line to the cursor: echo -e "\033[1K"
To clear the entire line: echo -e "\033[2K"
This assumes a VT100-compatible terminal or emulator.
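For example, a rough sketch combining the carriage return with the clear-line sequence (assuming an ANSI/VT100-compatible terminal; printf is used so no stray newline is emitted):
printf 'goodbye'           # no newline, so the cursor stays on this line
sleep 1
printf '\r\033[2Khello\n'  # return to column 0, erase the whole line, print the new text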

I used a leading carriage return a long time ago and it worked pretty well. I just tried it again in the GNOME Terminal program on Linux and it doesn't seem to work: nothing shows up on the screen. I changed it back to using a trailing line feed and every line I print gets displayed, but not overwritten. I suspect the lack of a line feed is what keeps the output from actually being sent to the display.
See this about flushing.

Related

What does writing "\r\027[1A\027[K" to stdout do?

I came across some code for a chat application in the terminal (in OCaml) and saw this string (in ASCII?) "\r\027[1A\027[K" being printed to the terminal before a new user message is printed to the terminal.
I have tried googling the literals one by one, so I know that "\r" stands for carriage return and \027 for ESC in ASCII, but what do "[1A" and "[K" do? What character encoding is this?
And finally, what is the aggregate effect of this command?
ESC [ introduces a control sequence. A is the command for "cursor up", so [1A moves the cursor up 1 line. K is "erase in line"; with no parameter it erases from the cursor to the end of the line. So \x1b[1A\x1b[K moves up one line and erases it (replaces it with blanks).
Of course, that is only valid if the terminal that receives that string recognizes the control sequences. Not all do.
See https://en.wikipedia.org/wiki/ANSI_escape_code
Note that \027 isn't an error here: OCaml string escapes are decimal, so \027 is character 27 (ESC), the same character written as the octal escape \033 in C or in a shell.
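To see the aggregate effect from a shell (same bytes, written with octal escapes; assumes an ANSI-compatible terminal):
printf 'old chat line\n'
sleep 1
printf '\r\033[1A\033[Knew chat line\n'   # carriage return, cursor up one line, erase to end of line, reprint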

VIM for Windows: How to edit to the end of line but before EOL character

What I am trying to do:
For example, I have a line in my code that looks like this
something.something()
I would like to add print() around it:
print(something.something())
How I am doing it:
I type in vim: ^c$print()<Esc>P meaning:
put cursor to the beginning of the line,
change entire line,
type in print(),
paste entire line back before print's ).
The Problem:
Unfortunately, the c$ part cuts the EOL character as well, so the subsequent P operation just pastes the line above print(). The end result looks like this:
something.something()
print()
My Thoughts:
Right now the workaround is to use v mode to highlight the entire line except the EOL character first, then do the above.
I am looking for something akin to ct$ or ci$, but neither of those works. My line doesn't always end with (); it could be __dict__ or just plain text, so cf) is handy but I am looking for something more universal. Thanks.
Of course it's doable out of the box.
Assuming your description of what you are doing is exact, the most likely reason it doesn't work is something in your config, because c$ (or its better alternative C) should never yank the EOL.
Here is a demonstration using your method as described in your question:
^c$print()<Esc>P
and the method I would use:
^Cprint(<C-r>")<Esc>
I don't think you want to be going into edit mode at all. Just do:
:s/something.something()/print(&)/g
Note that you can do this pretty easily interactively (e.g., you don't have to type 'something.something()') by yanking something.something into the unnamed register (e.g., put your cursor on the text and hit 'yiw', though what gets yanked exactly will depend on the current setting of iskeyword), and typing :s/<ctrl>r"/...
Or, as Christian Gibbons points out in the comments, if you want to replace the entire line you can simply do:
:s/.*/print(&)
Try ^cg_print()<Esc>P.
The g_ movement means "to the last non-blank character of the line", and since on Windows it appears the carriage return is part of the line when you yank/delete, using g_ instead of $ on Windows may be advisable.
If you find yourself almost never needing $, you can swap the two commands in your .vimrc:
onoremap g_ $
onoremap $ g_

How can I make my terminal prompt extend the width of the terminal?

I noticed in this video, that the terminal prompt extends the entire width of the terminal before breaking down to a new line. How can I set my PS1 variable to fill the remaining terminal space with some character, like the way this user did?
The issue is, I don't know how to update the PS1 variable per command. It seems to me that the string value for PS1 is only read in once, just as the .bashrc file is only read in once. Do I have to write some kind of hook after each command or something?
I should also point out that the PS1 variable will evaluate to a different length depending on the escape characters that make it up. For example, \w prints the path.
I know I can get the terminal width with $COLUMNS, and the width of the current PS1 variable with ${#PS1}, do the math, and print the right amount of filler characters, but how do I get it to update every time? Is there a preferred way?
Let's suppose you want your prompt to look something like this:
left text----------------------------------------------------------right text
prompt$
This is pretty straight-forward provided that right text has a known size. (For example, it might be the current date and time.) What we do is to print the right number of dashes (or, for utf-8 terminals, the prettier \u2500), followed by right text, then a carriage return (\r, not a newline) and the left text, which will overwrite the dashes. The only tricky bit is "the right number of dashes", but we can use $(tput cols) to see how wide the terminal is, and fortunately bash will command-expand PS1. So, for example:
PS1='\[$(printf "%*s" $(($(tput cols)-20)) "" | sed "s/ /-/g") \d \t\r\u#\h:\w \]\n\$ '
Here, $(($(tput cols)-20)) is the width of the terminal minus 20, which is based on \d \t being exactly 20 characters wide (including the initial space).
PS1 does not understand utf-8 escapes (\uxxxx), and inserting the appropriate substitution into the sed command involves an annoying embedded quote issue, although it's possible. However, printf does understand utf-8 escapes, so it is easier to produce the sequence of dashes in a different way:
PS1='\[$(printf "\\u2500%.0s" $(seq 21 $(tput cols))) \d \t\r\u#\h:\w \]\n\$ '
Yet another way to do this involves turning off the terminal's autowrap, which is possible if you are using xterm or a terminal emulator which implements the same control codes (or the linux console itself). To disable autowrap, output the sequence ESC[?7l. To turn it back on, use ESC[?7h. With autowrap disabled, once output reaches the end of a line, the last character will just get overwritten with the next character instead of starting a new line. With this technique, it's not really necessary to compute the exact length of the dash sequence; we just need a string of dashes which is longer than any console will be wide, say the following:
DASHES="$(printf '\u2500%0.s' {1..1000})"
PS1='\[\e[?7l\u#\h:\w $DASHES \e[19D \d \t\e[?7h\]\n\$ '
Here, \e[19D is the terminal-emulator code for "move cursor backwards 19 characters". I could have used $(tput cub 19) instead. (There might be a tput parameter for turning autowrap on and off, but I don't know what it would be.)
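For the record, terminfo does name capabilities for this on terminals whose entries define them (rmam/smam), so the tput equivalents would be:
tput rmam   # turn automatic margins (autowrap) off
tput smam   # turn them back on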
The example in the video also involves inserting a right-aligned string in the actual command line. I don't know any clean way of doing this with bash; the console in the video is almost certainly using zsh with the RPROMPT feature. Of course, you can output right-aligned prompts in bash, using the same technique as above, but readline won't know anything about them, so as soon as you do something to edit the line, the right prompt will vanish.
Use PROMPT_COMMAND to reset the value of PS1 before each command.
PROMPT_COMMAND=set_prompt
set_prompt () {
PS1=...
}
Note that some system script (or you yourself) may already use PROMPT_COMMAND for something, in which case you can simply add to it.
PROMPT_COMMAND="$PROMPT_COMMAND; set_prompt"
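For instance, a minimal sketch of such a set_prompt, reusing the dash-fill idea from the answer above (the 20-column figure again assumes the " \d \t" block on the right; the user@host:cwd layout is just illustrative):
set_prompt () {
    local fill
    # dashes filling the terminal width minus the 20 columns that " \d \t" occupies
    fill=$(printf '%*s' $(( $(tput cols) - 20 )) '' | tr ' ' '-')
    # the whole first line sits inside \[...\] so readline counts it as zero width;
    # the prompt readline actually measures is the "$ " after the \n
    PS1="\[${fill} \d \t\r\u@\h:\w \]"'\n\$ '
}
PROMPT_COMMAND=set_prompt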

How to escape unicode characters in bash prompt correctly

I have a specific method for my bash prompt, let's say it looks like this:
CHAR="༇ "
my_function="
prompt=\" \[\$CHAR\]\"
echo -e \$prompt"
PS1="\$(${my_function}) \$ "
To explain the above: I'm building my bash prompt by executing a function stored in a string, which was a decision made as the result of this question. Let's pretend it works fine, because it does, except when unicode characters get involved.
I am trying to find the proper way to escape a unicode character, because right now it messes with the bash line length. An easy way to test whether it's broken is to type a long command, execute it, press CTRL-R and type to find it, and then press CTRL-A and CTRL-E to jump to the beginning / end of the line. If the text gets garbled then it's not working.
I have tried several things to properly escape the unicode character in the function string, but nothing seems to be working.
Special characters like this work:
COLOR_BLUE=$(tput sgr0 && tput setaf 6)
my_function="
prompt="\\[\$COLOR_BLUE\\] \"
echo -e \$prompt"
Which is the main reason I made the prompt a function string. That escape sequence does NOT mess with the line length, it's just the unicode character.
The \[...\] sequence says to ignore this part of the string completely, which is useful when your prompt contains a zero-length sequence, such as a control sequence which changes the text color or the title bar, say. But in this case, you are printing a character, so the length of it is not zero. Perhaps you could work around this by, say, using a no-op escape sequence to fool Bash into calculating the correct line length, but it sounds like that way lies madness.
The correct solution would be for the line length calculations in Bash to correctly grok UTF-8 (or whichever Unicode encoding it is that you are using). Uhm, have you tried without the \[...\] sequence?
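Concretely, the "without \[...\]" variant would be just the following (a sketch, assuming a UTF-8 locale and a bash recent enough to count the character's display width correctly):
CHAR="༇"
PS1="$CHAR"' \$ '   # the visible character stays outside \[...\]; only zero-width
                    # sequences such as colour codes belong inside the brackets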
Edit: The following implements the solution I propose in the comments below. The cursor position is saved, then two spaces are printed, outside of \[...\], then the cursor position is restored, and the Unicode character is printed on top of the two spaces. This assumes a fixed font width, with double width for the Unicode character.
PS1='\['"`tput sc`"'\] \['"`tput rc`"'༇ \] \$ '
At least in the OSX Terminal, Bash 3.2.17(1)-release, this passes cursory [sic] testing.
In the interest of transparency and legibility, I have ignored the requirement to have the prompt's functionality inside a function, and the color coding; this just changes the prompt to the character, space, dollar prompt, space. Adapt to suit your somewhat more complex needs.
@tripleee wins it; posting the final solution here because it's a pain to post code in comments:
CHAR="༇"
my_function="
prompt=\" \\[`tput sc`\\] \\[`tput rc`\\]\\[\$CHAR\\] \"
echo -e \$prompt"
PS1="\$(${my_function}) \$ "
The trick, as pointed out in @tripleee's link, is the use of the commands tput sc and tput rc, which save and then restore the cursor position. The code is effectively saving the cursor position, printing two spaces for width, restoring the cursor position to before the spaces, then printing the special character, so that the width of the line comes from the two spaces, not the character.
(Not the answer to your problem, but some pointers and general experience related to your issue.)
I see the behaviour you describe with cmd-line editing (Ctrl-R, ... Ctrl-A Ctrl-E ...) all the time, even without unicode chars.
At one work-site, I spent the time to figure out the difference between the terminal's interpretation of the TERM setting vs. the TERM definition used by the OS (well, stty I suppose).
Now, when I have this problem, I escape out of my current attempt to edit the line, bring the line up again, and then immediately go into 'vi' mode, which opens the vi editor (press just the 'v' char, right?). All the ease of use of a full-fledged session of vi; why go with less ;-)?
Looking again at your problem description, when you say
my_function="
prompt=\" \[\$CHAR\]\"
echo -e \$prompt"
That is just a string definition, right? I'm assuming you're simplifying the problem description and that this stands in for the output of your my_function. The steps of creating the function definition, calling the function, and using the returned values offer a lot of opportunities for shell quoting to not work the way you want it to.
If you edit your question to include the my_function definition and its complete use (reduced to just what is causing the problem), it may be easier for others to help with this too. Finally, do you use set -vx regularly? It can help show the how/when/what of variable expansions; you may find something there.
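(As a quick illustration of set -vx, hypothetical usage around the prompt assignment from your question:)
set -vx                        # -v echoes each line as read, -x traces expansions
PS1="\$(${my_function}) \$ "   # watch how the quoting and substitutions expand
set +vx                        # turn both off again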
Failing all of those, look at O'Reilly's termcap & terminfo. You may need to look at the man page for your local system's stty and related commands, and you may do well to look for user groups specific to your Linux system (I'm assuming you use a Linux variant).
I hope this helps.

Emacs showing ^M in a process buffer

At the moment, I have a process-buffer which is utf-8-auto (emacs modeline reports the buffer as utf-8-auto-dos) with CRLF style newlines. When I write multi-line text into the buffer via a process-send-region or process-send-string each line is suffixed with ^M.
What makes this problem odd is that text written to the process-buffer directly from the process, does not contain ^M's.
It doesn't seem to make any difference where the source text comes from, in fact, even a multi-line region marked and sent that already appears in the process buffer (that doesn't contain ^M) will have them when sent.
(Note the source text for process-send-region will always come from an Emacs buffer; for process-send-string, multi-line text will come from the Windows clipboard interface to the kill ring, or again from an Emacs buffer via the kill ring.)
I should also add that the incoming text to the buffer is parsed by an after-change-functions hook (to do some colorisation based on input), so as a last resort I could do an additional replace-regexp-in-string on this incoming text as part of that hook function. I'd like to avoid that because it seems wrong, but I'll add it as a hacky solution if nothing else works.
Addendum
I updated the encoding settings for the buffer and the process to use utf-8-dos instead of utf-8-auto and the ^M's vanished.
So in the buffer setup part of my app, I did...
(switch-to-buffer "sock-buffer")
(set-process-coding-system (get-process sock-process) 'utf-8-dos 'utf-8-dos)
(set-buffer-file-coding-system 'utf-8-dos nil)
(set-buffer-process-coding-system 'utf-8-dos 'utf-8-dos)
Then reduced this to just...
(switch-to-buffer "sock-buffer")
(set-buffer-process-coding-system 'utf-8-dos 'utf-8-dos)
And everything worked fine.
This is because those files are in DOS/Windows line endings. You can use C-x [Enter] f unix [Enter] to convert them to the Unix encoding.
^L is a page break. I've seen them some times to separate different parts of source code (for old-fashioned listings in a text printer), or in text documentation to insert an actual "new page" command.
As of the update, you can see that you have to set set-process-coding-system to the correct coding system.
Alternatively to the dos2unix approach, you could use one of the MULE commands in Emacs, or (my favorite), since these characters are mistakenly treated as part of the text, you can replace them using the command to replace a string in the text: M-% C-q C-m RET
M-% is the query-replace command.
C-q means "let me type the next character literally, without it being interpreted as the RETURN key".
I believe you see those because of inconsistencies in your newlines (e.g. Windows newlines vs *nix ones); you should probably try dos2unix.
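If you go the dos2unix route, a quick sketch from the shell (file.txt is just a placeholder name):
dos2unix file.txt                 # rewrite CRLF line endings as LF in place
tr -d '\r' < file.txt > out.txt   # equivalent fallback if dos2unix isn't installed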
