Why does application output freeze on breakpoint in QtCreator? - qt-creator

I've found that stopping a program in QtCreator at a breakpoint stops the program output. For example, the following program gives this output when it hits the breakpoint:
#include <QDebug>
#include <unistd.h>
int main(int argc, char *argv[])
{
for (int i = 0; i < 20; ++i) {
    qDebug() << "Printing line " << i << "\n";
}
int a = 0; // <- breakpoint here
}
Output:
07:50:37: Debugging starts
2020-09-04 07:50:44.162658-0700 editor[3615:65377] Printing line 0
2020-09-04 07:50:44.162689-0700 editor[3615:65377] Printing line 1
2020-09-04 07:50:44.162695-0700 editor[3615:65377] Printing line 2
2020-09-04 07:50:44.162699-0700 editor[3615:65377] Printing line 3
2020-09-04 07:50:44.162702-0700 editor[3615:65377] Printing line 4
2020-09-04 07:50:44.162706-0700 editor[3615:65377] Printing line 5
2020-09-04 07:50:44.162709-0700 editor[3615:65377] Printing line 6
2020-09-04 07:50:44.162713-0700 editor[3615:65377] Printing line 7
2020-09-04 07:50:44.162716-0700 editor[3615:65377] Printing line 8
2020-09-04 07:50:44.162720-0700 editor[3615:65377] Printing line 9
2020-09-04 07:50:44.162724-0700 editor[3615:65377] Printing line 10
2020-09-04 07:50:44.162727-0700 editor[3615:65377] Printing line 11
2020-09-04 07:50:44.162731-0700 editor[3615:65377] Printing line 12
2020-09-04 07:50:44.162735-0700 editor[3615:65377] Printing line 13
2020-09-04 07:50:44.162751-0700 editor[3615:65377] Pri
It just stops mid-string like this. I've tried redirecting the output to the terminal, but then the program just exits right away without printing anything. If I add usleep(20000); above the print line, it prints all 20 strings; with usleep(10000); it gets to number 16.
I'm using QtCreator 4.13.0, Qt 5.15.0.
I'm using the Clang that came with macOS Catalina (x86 64-bit, in /usr/bin).
I'm using the LLDB that came with it (also in /usr/bin/lldb).
This is a fresh install of the OS; nothing else is installed besides the programs that came with it. The macOS version is 10.15.5.

Related

Why \r required when using tput el1

While creating a countdown timer in bash, I came up with the following code:
for ((n=15; n > 0; n--)); do
    printf "Reload | $n" && sleep 1
done
This works fine; it keeps appending the printf output to the same line, as expected.
So I opened the documentation on tput and found:
tput el1
Clear to beginning of line
And tried it out:
for ((n=15; n > 0; n--)); do
    tput el1; printf "Reload | $n" && sleep 1
done
However, the output shifts further to the right on each iteration, so it becomes:
Reload | 15
Reload | 14
Reload | 13
Those outputs are on the same line, so the 'clear' works, but for some reason the cursor is not restored to the first column.
I've managed to fix it by adding a carriage return (\r) behind the printf:
for ((n=15; n > 0; n--)); do
    tput el1; printf "Reload | $n\r" && sleep 1
done
I've read quite a few docs but can't grasp why the \r is needed here. Please point me to the right documentation/duplicate on this matter.
Compare el1
tput el1
Clear to beginning of line
and clear
tput clear
clear screen and home cursor
Note that clear explicitly states that it moves the cursor; el1 does not. It only erases whatever was on the current line between the start of the line and the current cursor position, leaving the cursor where it is for the following text.
The carriage return, on the other hand, is typically interpreted as moving the cursor to the beginning of the current line without advancing to the next line. The corresponding terminal capability would be cr.
A more robust solution would be to move the cursor first, then clear to the end of the line, and finally output your next bit of text. This handles the case where a new bit of text is shorter than the previous, which will be an issue when you switch from double-digit to single-digit numbers.
for ((n=15; n>0; n--)); do
    tput cr el; printf "Reload | %2d" "$n"; sleep 1
done
(The %2d is to ensure the single-digit values don't "jump" one space to the left.)
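On terminals that follow the common VT100/ANSI conventions (an assumption; tput is the portable route), the same effect is available with literal control sequences: \r is the carriage return and ESC [ K erases from the cursor to the end of the line. A short sketch, with the countdown shortened to three steps:

```shell
# \r returns the cursor to column 0; \033[K erases to the end of the line.
render() { printf '\rReload | %2d\033[K' "$1"; }

for ((n=3; n>0; n--)); do
    render "$n"
    sleep 1
done
printf '\n'
```

Hard-coding the sequences trades the terminfo lookup for an assumption about the terminal, which is usually safe on modern emulators but is exactly what tput exists to avoid.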

Auto-insert blank lines in `tail -f`

Having a log file such as:
[DEBUG][2016-06-24 11:10:10,064][DataSourceImpl] - [line A...]
[DEBUG][2016-06-24 11:10:10,069][DataSourceImpl] - [line B...]
[DEBUG][2016-06-24 11:10:12,112][DataSourceImpl] - [line C...]
which is being monitored in real time with tail -f, is it possible to auto-insert "blank lines" (via a command we would pipe the tail output into) after, let's say, 2 seconds of inactivity?
Expected result:
[DEBUG][2016-06-24 11:10:10,064][DataSourceImpl] - [line A...]
[DEBUG][2016-06-24 11:10:10,069][DataSourceImpl] - [line B...]
---
[DEBUG][2016-06-24 11:10:12,112][DataSourceImpl] - [line C...]
(because there is a gap of more than 2 seconds between 2 successive lines).
awk -F'[][\\- ,:]+' '1'
The above will split fields on any run of ], [, -, space, comma, and colon characters, so that for a line like
[DEBUG][2016-06-24 11:10:10,064][DataSourceImpl] - [line A...]
the fields are $2 = DEBUG, $3 = 2016, $4 = 06, $5 = 24, $6 = 11, $7 = 10, $8 = 10, $9 = 064, and so on ($1 is empty because the line starts with a separator).
You can then concatenate some of the fields and use that to measure time difference:
tail -f input.log | awk -F'[][\\- ,:]+' '{ curr = ($3$4$5$6$7$8$9) + 0 }
prev && prev + 2000 < curr { print "" } # Print an empty line if more than two
                                        # seconds have passed since last record.
{ prev=curr } 1'
(The + 0 forces a numeric comparison: the concatenation yields a plain string, and awk would otherwise compare prev + 2000 against it lexicographically. The prev guard avoids printing a blank line before the very first record.)
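To sanity-check the idea without waiting on tail -f, the three sample lines can be fed straight into the awk program. A sketch: the + 0 coerces curr to a number (awk compares a number against an ordinary string lexicographically), and the prev guard suppresses a blank line before the first record.

```shell
out=$(printf '%s\n' \
  '[DEBUG][2016-06-24 11:10:10,064][DataSourceImpl] - [line A...]' \
  '[DEBUG][2016-06-24 11:10:10,069][DataSourceImpl] - [line B...]' \
  '[DEBUG][2016-06-24 11:10:12,112][DataSourceImpl] - [line C...]' |
  awk -F'[][\\- ,:]+' '{ curr = ($3$4$5$6$7$8$9) + 0 }  # +0: numeric compare
    prev && prev + 2000 < curr { print "" }             # blank line on >2s gap
    { prev = curr } 1')
printf '%s\n' "$out"
```

Only the A-to-C gap exceeds 2000 ms (12,112 minus 10,064), so a single blank line appears before line C.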
tail does not have such a feature. If you want, you could implement a program or script that checks the last line of the file; something like this (pseudocode):
previous_last_line = last line of your file
while(sleep 2 seconds)
{
if (last_line == previous_last_line)
print newline
else
print lines since previous_last_line
}
Two remarks:
this gives you no output for up to 2 seconds at a time; you could poll the last line more often and keep a timestamp, but that requires more code
this relies on all lines being unique, which is reasonable in your case, since each line carries a timestamp
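The pseudocode above can be rendered as a small self-contained shell sketch. The file name, contents, and timings here are made up for the demo; the awk line implements "print lines since previous_last_line" and relies on the uniqueness assumption.

```shell
file=$(mktemp)                                # stand-in for the real log
printf 'line A\nline B\n' > "$file"
( sleep 3; printf 'line C\n' >> "$file" ) &   # one new line arrives after 3s

out=$(
    prev=$(tail -n 1 "$file")
    for poll in 1 2 3; do                     # three 2-second polls
        sleep 2
        last=$(tail -n 1 "$file")
        if [ "$last" = "$prev" ]; then
            echo "---"                        # nothing new: print a separator
        else
            # print every line that appeared after the previous last line
            awk -v p="$prev" 'seen; $0 == p { seen=1 }' "$file"
            prev=$last
        fi
    done
)
wait
printf '%s\n' "$out"
```

The first and third polls see no change and print the separator; the second poll prints the newly arrived line C.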

sed print between two line patterns only if both patterns are found

Suppose I have a file with:
Line 1
Line 2
Start Line 3
Line 4
Line 5
Line 6
End Line 7
Line 8
Line 9
Start Line 10
Line 11
End Line 12
Line 13
Start line 14
Line 15
I want to use sed to print between the patterns only if both /Start/ and /End/ are found.
sed -n '/Start/,/End/p' works as expected if you know both markers are there and in the order expected, but it just prints from Start to the end of the file if End is either out of order or not present. (i.e., prints line 14 and line 15 in the example)
I have tried:
sed -n '/Start/,/End/{H;}; /End/{x; p;}' file
Prints:
# blank line here...
Start Line 3
Line 4
Line 5
Line 6
End Line 7
End Line 7
Start Line 10
Line 11
End Line 12
which is close, but has two issues:
Unwanted leading blank line
End Line 7 printed twice
I am hoping for a result similar to
$ awk '/Start/{x=1} x{buf=buf$0"\n"} /End/{print buf; buf=""; x=0}' file
Start Line 3
Line 4
Line 5
Line 6
End Line 7
Start Line 10
Line 11
End Line 12
(blank lines between the blocks not necessary...)
With GNU sed and sed from Solaris 11:
sed -n '/Start/{h;b;};H;/End/{g;p;}' file
Output:
Start Line 3
Line 4
Line 5
Line 6
End Line 7
Start Line 10
Line 11
End Line 12
If Start is found copy current pattern space to hold space (h) and branch to end of script (b). For every other line append current pattern space to hold space (H). If End is found copy hold space back to pattern space (g) and then print pattern space (p).
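To verify the command end to end, the sample file from the question can be recreated and run through it (a sketch; GNU sed assumed, per the answer):

```shell
# Recreate the sample input in a temporary file.
f=$(mktemp)
cat > "$f" <<'EOF'
Line 1
Line 2
Start Line 3
Line 4
Line 5
Line 6
End Line 7
Line 8
Line 9
Start Line 10
Line 11
End Line 12
Line 13
Start line 14
Line 15
EOF
# Prints only the two complete Start...End blocks; the trailing
# "Start line 14" block has no End, so it is never printed.
out=$(sed -n '/Start/{h;b;};H;/End/{g;p;}' "$f")
printf '%s\n' "$out"
rm -f "$f"
```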
GNU sed: after encountering Start, keep appending lines as long as we don't see End; once we do, print the pattern space and start over:
$ sed -n '/Start/{:a;N;/End/!ba;p}' infile
Start Line 3
Line 4
Line 5
Line 6
End Line 7
Start Line 10
Line 11
End Line 12
Getting the newline between blocks is tricky. This would add one after each block, but results in an extra blank at the end:
$ sed -n '/Start/{:a;N;/End/!ba;s/$/\n/p}' infile
Start Line 3
Line 4
Line 5
Line 6
End Line 7
Start Line 10
Line 11
End Line 12
[blank]
You can use this awk:
awk 'x{buf=buf ORS $0} /Start/{x=1; buf=$0} /End/{print buf; buf=""; x=0}' file
Start Line 3
Line 4
Line 5
Line 6
End Line 7
Start Line 10
Line 11
End Line 12
Here is a sed version that does the same with macOS (BSD) sed (based on Benjamin's sed command):
sed -n -e '/Start/{:a;' -e 'N;/End/!ba;' -e 'p;}' file
Personally, I prefer your awk solution, but:
sed -n -e '/Start/,/End/H' -e '/End/{s/.*//; x; p}' input

How to reduce live log data?

A program produces a log file, which I am watching. Unfortunately, the log file sometimes contains the same line (Line 1 below) 50 times in a row.
Is there a possibility to get instead of
program.sh
Line 1
Line 1
Line 1
Line 1
...
Line 1
Line 1
Line 2
just something like:
program.sh
Line 1
\= repeated 43 times
Line 2
You can use this awk:
awk 'function prnt() { print p; if (c>1) print " \\= repeated " c " times"; }
p && p != $0{prnt(); c=0} {p=$0; c++}; END{prnt()}' file
Line 1
\= repeated 43 times
Line 2
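A quick way to check the program is to pipe a few synthetic repeated lines through it (a sketch; note the count line is printed with a leading space, exactly as the program writes it):

```shell
# Generate "Line 1" three times, then "Line 2", and collapse the repeats.
out=$( { printf 'Line 1\n%.0s' 1 2 3; echo 'Line 2'; } |
  awk 'function prnt() { print p; if (c>1) print " \\= repeated " c " times" }
       p && p != $0 { prnt(); c=0 } { p=$0; c++ } END { prnt() }' )
printf '%s\n' "$out"
```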

head command to skip last few lines of file on MAC OSX

I want to output all lines of a file except the last 4, in Terminal.
According to the UNIX man page, the following could be a solution:
head -n -4 main.m
MAN Page:
-n, --lines=[-]N
print the first N lines instead of the first 10; with the lead-
ing '-', print all but the last N lines of each file
I read the man page here: http://unixhelp.ed.ac.uk/CGI/man-cgi?head
But on macOS I get the following error:
head: illegal line count -- -4
What else can be done to achieve this goal?
The GNU version of head supports negative line counts:
brew install coreutils
ghead -n -4 main.m
Use awk for example:
$ cat file
line 1
line 2
line 3
line 4
line 5
line 6
$ awk 'n>=4 { print a[n%4] } { a[n%4]=$0; n=n+1 }' file
line 1
line 2
$
It can be simplified to awk 'n>=4 { print a[n%4] } { a[n++%4]=$0 }' but I'm not sure if all awk implementations support it.
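If installing coreutils isn't an option and the input is a regular file (not a pipe), a two-pass approach with stock tools also works: count the lines first, then print that many minus 4. A self-contained sketch, with a generated temp file standing in for main.m:

```shell
f=$(mktemp)                                    # stand-in for main.m
printf 'line %d\n' 1 2 3 4 5 6 > "$f"
# wc -l counts the lines; head then prints all but the last 4.
out=$(head -n "$(( $(wc -l < "$f") - 4 ))" "$f")
printf '%s\n' "$out"
rm -f "$f"
```

Because it reads the file twice, this won't work on a pipe or on a file that is still growing.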
A Python one-liner:
$ cat foo
line 1
line 2
line 3
line 4
line 5
line 6
$ python -c "import sys; sys.stdout.writelines(sys.stdin.readlines()[:-4])" < foo
line 1
line 2
