Git Bash eating last character on each line - bash

I am having a problem with Git Bash on Windows. It always eats the last character of every line that wraps, which sometimes leaves me with a repeated letter in my commit messages. Any ideas on how to solve this?
You can see this in the example below: the commit message is correct but not fully displayed.

Never mind, I figured out what the problem was. This only happens when the console overflows, which is why that initial letter g is there. Essentially the program doesn't know the initial g is there, but it still wraps the line after the same number of characters, so the last letter is swallowed. However, it does not ignore your input; it just doesn't display it.

Related

Line feed - Is \n\r valid?

In a Windows environment:
\r (Carriage Return) moves the cursor to the beginning of the line without advancing to the next line.
\n (Line Feed) moves the cursor down to the next line without returning to the beginning of the line.
According to these definitions, \r\n and \n\r could be used interchangeably, am I right?
It should not matter whether you go down then left or left then down.
This question is similar, but it only says what everyone is used to instead of explaining why.
Depends on the context.
If you're controlling an actual typewriter: Yes, they're the same.
If you're talking about file formats or network protocols: No, you have to use the right sequence of bytes (regardless of the reason those bytes were chosen originally).
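For example, a parser that expects CRLF line endings will not accept the reversed order. A minimal sketch in Python (the header string and the splitting logic are purely illustrative, not taken from any real protocol implementation):

crlf = b"Host: example.com\r\nAccept: */*\r\n\r\n"   # what e.g. HTTP expects
lfcr = b"Host: example.com\n\rAccept: */*\n\r\n\r"   # same characters, wrong order

def split_headers(raw):
    # A naive CRLF-based splitter: it only understands \r\n line endings.
    head, _, _ = raw.partition(b"\r\n\r\n")
    return head.split(b"\r\n")

print(split_headers(crlf))   # [b'Host: example.com', b'Accept: */*']
print(split_headers(lfcr))   # garbage -- the byte order is not interchangeable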

understanding CUB ansi escape sequence

I don't understand why the CUB sequence is sometimes allowed to continue through the previous line and sometimes not. The documentation says it's not, but in real situations...
http://vt100.net/docs/vt100-ug/chapter3.html#CUB
For example, I have a screen filled with spaces, 80 columns by 24 lines.
I am at line 3, column 4, which can be set with the escape sequence \033[3;4H.
I move the cursor left 10 times with the sequence \033[10D,
which puts me at line 2, column 76.
So it wrapped through to the previous line?!
And sometimes it doesn't.
Please save me! :)
I can reduce the situation, but this is where I saw it:
I'm writing a vt* emulator and everything works fine. I can launch emacs/vim and others, but then I launched vttest in PuTTY with the "script" command to record every typed character and sequence, like so:
# script test
Script started, file is test
# vttest
...
When I do "cat test" in PuTTY, for example, it replays everything just as I typed it. When I play it back with my emulator I am able to parse and analyse every escape sequence, but the display is not the same.
The wording on VT100.net is fairly clear:
If an attempt is made to move the cursor to the left of the left margin,
the cursor stops at the left margin.
In a recent discussion, someone pointed out that PuTTY honors a (non-VT100) capability bw; quoting from ncurses' terminfo manual:
auto_left_margin    bw    bw    cub1 wraps from column 0 to last column
PuTTY's behavior for wrapping at the margins differs from the VT100's, as you have seen. ncurses has a terminal entry named "putty", simply because PuTTY differs from all of the other terminals enough to make using it a nuisance otherwise.
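The difference boils down to whether the emulator clamps the cursor at the left margin or lets it wrap backwards. A rough sketch of both behaviours in Python (the function and parameter names are made up for illustration, and real terminals may count a multi-column CUB differently when wrapping):

def cursor_backward(row, col, n, cols=80, auto_left_margin=False):
    # Move the cursor n columns left from (row, col), using 1-based coordinates.
    if not auto_left_margin:
        # Strict VT100 behaviour: the cursor stops at the left margin.
        return row, max(1, col - n)
    # bw-style behaviour: treat the screen as one long line and wrap backwards.
    pos = max(0, (row - 1) * cols + (col - 1) - n)
    return pos // cols + 1, pos % cols + 1

print(cursor_backward(3, 4, 10))                          # (3, 1): stops at the margin
print(cursor_backward(3, 4, 10, auto_left_margin=True))   # (2, 74): wraps to the previous line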

Incrementally reading logs

Looked around with numerous search strings but can't find anything quite like this:
I'm writing a custom log parser (à la analog or webalizer, except not for a webserver) and I want to be able to skip the hard work for the lines that have already been parsed. I have thought about using a history file like webalizer does, but I have no idea how it actually works internally and my C is pretty poor.
I've considered hashing each line and writing the hashes out, then parsing the history file for their presence, but I think this will perform poorly.
The only other method I can think of is storing the line number of the last parse and skipping until that number is reached the next time round. I am not sure what happens when the log is rotated, though.
Any other ideas would be appreciated. I will be writing the parser in ruby but tips in a similar language will help as well.
The solutions I can think of right now are bound to be brittle.
Even if you store the line number and later realize it would be past the length of the current file, what happens if old lines have been trimmed? You would start reading (well) after the last position.
If, on the other hand, you are sure your log files won't be tampered with and they will only be rotated, I only see two ways of doing what you want, and I'm not sure the second is applicable to you.
Anyway, here goes.
First solution
You store the last line you parsed along with a timestamp. At the next run, you consider all the rotated log files, sort them by their last-modified date, figure out which one you read last time, and start reading from there.
I didn't think this through, there might be funny corner cases you will need to handle.
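A very rough sketch of that idea in Python (the glob pattern and function name are made up; the question mentions Ruby, but the approach translates directly):

import glob, os

def files_since_last_run(pattern, last_run_time):
    # All logs (current plus rotated), oldest first by last-modified date.
    logs = sorted(glob.glob(pattern), key=os.path.getmtime)
    # Anything modified since the previous run may contain unparsed lines.
    return [path for path in logs if os.path.getmtime(path) > last_run_time]

# Within the first returned file, skip forward until the stored "last parsed
# line" is found, then resume parsing from the line after it.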
Second solution
You create a background script that continuously watches the log file. A quick search on Google turned up this gem, but I'm not sure whether that's even an option for you. Even then, you might want to combine this solution with the previous one, just in case your daemon gets interrupted (because that's clearly bound to happen at some point).
As you read the file and parse the lines, keep track of the byte count. Save that. On the next read, try to seek to that byte offset in the file. If the file is smaller than the byte count, it's a new file, so start at the beginning.
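A minimal sketch of that approach in Python (the question asks about Ruby, where IO#seek and IO#tell work the same way; the state-file path is just an illustration):

import os

STATE_FILE = "parser.offset"   # hypothetical location for the saved byte count

def read_new_lines(log_path):
    try:
        offset = int(open(STATE_FILE).read())
    except (OSError, ValueError):
        offset = 0

    # If the file is now smaller than the saved offset, it was rotated:
    # start again from the beginning.
    if os.path.getsize(log_path) < offset:
        offset = 0

    with open(log_path, "rb") as log:
        log.seek(offset)
        data = log.read()
        offset = log.tell()

    with open(STATE_FILE, "w") as state:
        state.write(str(offset))

    return data.decode("utf-8", errors="replace").splitlines()

for line in read_new_lines("access.log"):
    pass   # parse the line here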

Printing on screen the percentage completed while my tcl script is running?

I have a Tcl script which takes a few minutes to run (the execution time varies with the configuration).
While the script executes, I want users to have some idea of whether it's still running and how long it will take to complete.
Some of the ideas I've had so far:
1) Indicate progress with dots (...) that keep increasing as each internal command runs. But that doesn't really give a first-time user a sense of how much more there is to go.
2) Use the revolving slash that I've seen used in many places.
3) Print an actual percentage completed on screen. I have no idea whether this is viable or how to go about it.
Does anyone have any ideas on what could be done so that users of the script understand what's going on and how to do this?
Also, if I implement it using dots, how do I get each . to print on the same line? If I use puts in the Tcl script, each . just gets printed on the next line.
And for the revolving slash, I would need to overwrite something that was already printed on screen. How can I do that in Tcl?
First off, the reason you were having problems printing dots was that Tcl was buffering its output, waiting for a new line. That's often a useful behavior (often enough that it's the default) but it isn't wanted in this case so you turn it off with:
fconfigure stdout -buffering none
(The other buffering options are line and full, which offer progressively higher levels of buffering for improved performance but reduced responsiveness.)
Alternatively, do flush stdout after printing a dot. (Or print the dots to stderr, which is unbuffered by default due to mainly being for error messages.)
Doing a spinner isn't much harder than printing dots. The key trick is to use a carriage return (a non-printable character sometimes visualized as ^M) to move the cursor position back to the start of the line. It's nice to factor the spinner code out into a little procedure:
proc spinner {} {
    global spinnerIdx
    if {[incr spinnerIdx] > 3} {
        set spinnerIdx 0
    }
    set spinnerChars {/ - \\ |}
    puts -nonewline "\r[lindex $spinnerChars $spinnerIdx]"
    flush stdout
}
Then all you need to do is call spinner regularly. Easy! (Also, print something over the spinner once you've finished; just do puts "\r$theOrdinaryMessage".)
Going all the way to an actual progress meter is nice, and it builds on these techniques, but it requires that you work out how much processing there is to do and so on. A spinner is much easier to implement! (Especially if you've not yet nailed down how much work there is to do.)
The standard output stream is initially line buffered, so you won't see new output until you write a newline character, call flush or close it (which is automatically done when your script exits). You could turn this buffering off with...
fconfigure stdout -buffering none
...but diagnostics, errors, messages, progress, etc. should really be written to the stderr stream instead. It has buffering set to none by default, so you won't need fconfigure.

How to see which line of code "next" would execute in gdb?

While debugging some code in gdb, I want to see which line will be executed if I say next or step.
Of course I can say l, but if I say l a couple of times (and don't remember how many times), then l no longer prints the line that will be executed.
I can also scroll back to the last place gdb stopped and see which line it was at, but that sometimes involves digging through a lot of output.
I am wondering if I am missing a simple command in gdb which shows me the current line the debugger is stopped at?
To see the current line the debugger stopped at, you can use the frame command with no arguments. This achieves the same effect as the update command, and it works in both TUI and command-line mode.
You can use
list *$eip
or the shorter form
l *$eip
This will instruct gdb to print the source lines near the current program counter ($eip is x86-specific; $pc works on any architecture).
You can say l +0; the current line will be the first one listed.
The command l +offset lists the code starting from offset lines from the current line.
Note that if you have already used the list command, the list position will have moved, so it will no longer point at the next line to execute. This therefore only works as your first list command after the program stops.
It sounds like you want to run GDB in Emacs (which will show you current file and mark the current line), in DDD, or in tui mode.
