setvbuf (stdout, NULL, _IOLBF, 0);
This line is present in one of my programs and I am not able to figure out why. If I comment out this line, my app works fine, but if I keep it, my app crashes. I also read http://msdn.microsoft.com/en-us/library/86cebhfs(v=vs.100).aspx, but I was not able to understand from it why the application is crashing.
Please help.
Thanks
Part of the MSDN description for setvbuf() says:
_IOLBF: For some systems, this provides line buffering. However, for Win32, the behavior is the same as _IOFBF - Full Buffering.
So, on Windows, you will get the same effect as _IOFBF, which, as it says in the text, will use an automatically allocated buffer of the size you specified.
In your code, you specified a size of zero. Hence the crash.
On non-Win32 systems, the same command would activate line buffering and so possibly would be ok. Perhaps this code is multi-platform?
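To make the failure concrete, here is a minimal C sketch (my own illustration, assuming the Microsoft CRT behavior described above):

#include <stdio.h>

int main(void)
{
    /* On the Microsoft CRT, _IOLBF behaves like _IOFBF, so this asks
       for a full buffer of zero bytes - invalid, hence the crash: */
    /* setvbuf(stdout, NULL, _IOLBF, 0); */

    /* Fix: give the automatically allocated buffer a real size... */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    /* ...or, if unbuffered output was the goal, size 0 is legal here: */
    /* setvbuf(stdout, NULL, _IONBF, 0); */

    puts("hello");
    return 0;
}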
Today I was experimenting with the Linux shell. Here is what I did.
I wrote the output of clear to a file:
clear > clear.txt
Now the screen has some (non-empty) content on it. Then I cat the contents of clear.txt:
cat clear.txt
To my surprise, the entire screen got cleared. Can someone explain why? And if this works, why can't we do the same with all commands?
clear works very simply by transmitting a sequence of control codes which are interpreted by your terminal. The magic of actually clearing the screen is handled by your terminal (or terminal emulator, or console interface, or whatever you happen to be using) in response to receiving these control codes.
See also e.g. https://en.wikipedia.org/wiki/ANSI_escape_code
Well, it's not a full explanation, but this is how it is supposed to work:
http://man7.org/linux/man-pages/man1/clear.1.html
clear writes to the standard output. You can redirect the standard
output to a file (which prevents clear from actually clearing the
screen), and later cat the file to the screen, clearing it at that
point.
The file is a literal capture of what the clear command writes. Opening it in an editor shows:
^[[3J^[[H^[[2J
source: https://unix.stackexchange.com/questions/400142/terminal-h2j-caret-square-bracket-h-caret-square-bracket-2-j
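To see that it really is just those bytes doing the work, here is a small C sketch (my own, using the exact sequence captured in clear.txt; ^[ is the ESC character, 0x1B):

#include <stdio.h>

int main(void)
{
    /* ESC[3J clears the scrollback, ESC[H homes the cursor,
       ESC[2J clears the visible screen. */
    fputs("\033[3J\033[H\033[2J", stdout);
    fflush(stdout);
    return 0;
}

Redirect this program's output to a file and you get the same "magic" file; cat it and the terminal clears again.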
It is my understanding that by default, Character is Latin_1, Wide_Character is UCS-2, and Wide_Wide_Character is UCS-4, but that GNAT can have specified pragma Wide_Character_Encoding(UTF8); or -gnatW8 and that those characters and their strings will be UTF-8 encoded instead.
At least on Linux and FreeBSD, the results fit with my expectations. But on Windows the results are odd.
For either the Wide or Wide_Wide variants, once a character moves beyond the ASCII set, I get a garbled mess. I believe this is called mojibake. So I figured it was a codepage issue. After all, the default codepage in Windows, and therefore what the Console Host loads with, is 437, which isn't the UTF-8 codepage. After chcp 65001, instead of the mess of extra characters, there's an immediate exception: ADA.IO_EXCEPTIONS.DEVICE_ERROR : a-ztexio.adb:1295. Looking at where the exception occurred, it seems to be in the putc binding of fputc(). But this is Standard_Output; an EOF should never happen there, should it?
Is there some kind of special consideration Windows needs? How can I get UTF-8 output?
edit:
I tried piping the output into a text file. The supposedly UTF-8 encoded program still generates mojibake in the file. I'm not sure why this would immediately throw an exception in the console, though.
So then I tried directly opening and writing to a file instead of the console/pipe. Oddly this works exactly as it should. The text is completely correct.
I've never seen this kind of behavior with any other language, so it should still be possible to get proper UTF-8 at the console, right?
The deficiency in the Windows Console Host that so many others (not just here) describe has either been fixed or never existed in the first place. Based on this document, I feel it was probably always very misunderstood. Windows doesn't treat the console like files, and it's easy to fall into that trap.
Using this very straightforward code, along with what Windows needs and expects behind the scenes...
It correctly produces the following, as long as either pragma Wide_Character_Encoding(UTF8); or -gnatW8 is used.
Piping the output of this test program into a file works as it should, as does piping it into another program, and piping that captured file into another program.
Full UTF-8 behavior on Windows, just as one would expect under Linux.
What needs to be done is twofold. In the package initializer, the Console Host needs to be told what it's working with, which can be done like this.
Character output is then done through fputwc. According to the MS docs, fputc should never be used for Unicode on Windows, which is part of the problem GNAT has. String output and character/string input are all handled similarly.
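Since the original test code isn't reproduced here, this is a hedged C analogue of the same two steps (the names come from the MS docs; the Ada answer may have used different Win32 calls, e.g. SetConsoleOutputCP, to tell the Console Host what it's getting):

#include <fcntl.h>
#include <io.h>
#include <stdio.h>

int main(void)
{
    /* Step 1: tell the CRT the stream carries UTF-8. After this,
       byte functions such as fputc() must not be used on stdout,
       matching the fputc/fputwc point above. Windows-only. */
    _setmode(_fileno(stdout), _O_U8TEXT);

    /* Step 2: do all output through the wide functions; the CRT
       converts to UTF-8 on the way out. */
    fputwc(L'\xE9', stdout);                     /* e with acute accent */
    fwprintf(stdout, L" <- printed as UTF-8\n");
    return 0;
}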
Based on others' comments and some further research to confirm, I'm pretty sure this is a deficiency of the Windows Console Host.
edit: don't listen to this
Relative Emacs newbie here, just trying to adapt my programming workflow to fit with Emacs. So far I've discovered shell-pop, and I'm quite enjoying on-demand terminals that pop up when needed for banging out the odd command.
What I understand so far about Emacs is that shell is a "dumb" terminal that doesn't support ANSI control codes, which makes it incompatible with things like ncurses that try to draw complex UIs on a terminal. This is why you can't use less or top or similar in shell-mode.
However, I seem to be having trouble with ansi-term; it's not the be-all, end-all it's cracked up to be. Sure, it has no problems running less or git log or even nano, but there are a few things that can't quite seem to display properly when they're running in an ansi-term, such as apt-get and nosetests. I'm not sure quite what the name for it is, but apt-get's output is characterised by live-updating the very last line while unchanging lines of text scroll out above it. It seems to be halfway between something like less and something dumber, like cat. Somehow ansi-term doesn't like this at all, and I get very garbled output, where it seems to put everything on one line, or just generally loses its place and outputs things all over, randomly. In the case of nosetests, it starts off OK, but if any libraries spew out anything on STDERR, the output all goes to hell in a similar way.
With some fiddling it seems possible to fix this by mashing C-l and RET, but it's not always reliable.
Does anybody know what's going on here? Is there some way to fix ansi-term so that it can display everything properly? Or is there perhaps some other mode that I don't know about that is way better? Ideally I'd like something that "just works" as effortlessly as, e.g., GNOME Terminal, which can run all of the above-mentioned programs without a single hiccup.
Thanks!
I resolved this issue by commenting out my entire .emacs.el, then uncommenting it line by line and restarting Emacs each time. I discovered that the following line alone was responsible for the issue:
'(fringe-mode 0 nil (fringe))
(this line disables the fringes from inside custom-set-variables).
I guess this is a bug in Emacs: disabling the fringe causes term-mode to garble its output really badly whenever an output line exceeds $COLUMNS columns.
Anyway, I don't really like the fringes much at all, and it seems I was able to at least disable the left fringe without triggering this issue:
(set-fringe-mode (cons 0 8))
Maybe apt-get does different things based on the $TERM environment variable. What happens if you set TERM=dumb? If that makes things work, then you can experiment with different values until you find one that supports enough features but still works.
Note that git 2.0.1 (June 25th, 2014) now better detects dumb terminal when displaying verbose messages.
That might help Emacs better display some of the messages received from git, but the fringe-mode bug reported above is certainly the main cause.
See commit 38de156 by Michael Naumov (mnaoumov)
sideband.c: do not use ANSI control sequence on non-terminal
Diagnostic messages received on sideband #2 from the server side are sent to standard error with the ANSI terminal control sequence "\033[K" (erase to end of line) appended to each line.
However, some programs (e.g. GitExtensions for Windows) read and interpret and/or show the messages without understanding the terminal control sequences, with the result that the sequences are shown to their end users.
To help these programs, squelch the control sequence when the standard error stream is not being sent to a tty.
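The heart of that change is just a tty test before appending the control sequence; here is a rough C sketch of the idea (my paraphrase, not git's actual code):

#include <stdio.h>
#include <unistd.h>   /* isatty(), POSIX */

static void show_sideband_line(const char *msg)
{
    if (isatty(2))
        fprintf(stderr, "%s\033[K\n", msg);  /* erase-to-EOL on a real terminal */
    else
        fprintf(stderr, "%s\n", msg);        /* plain text for pipes and GUIs */
}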
I'm writing a command interpreter like BASH, and a \ followed by a newline implies a continuation of the input stream; how can I implement that in Win32?
If I use the console mode with ENABLE_LINE_INPUT, then the user can't press backspace in order to go back to the previous line; Windows prevents him from doing so. But if I don't set ENABLE_LINE_INPUT, then I have to manually reposition the cursor, which is rather tedious given that (1) the user might have redirected the input stream, and that (2) it might be prone to race conditions, and I'd rather have Windows do it if I can.
Any way to have my newline and eat it too?
Edit:
If this would require undocumented CSRSS port requests, then I'm still interested!
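For readers following along, the two modes being weighed look roughly like this in C (a sketch using only the documented console API, not a solution to the continuation-line problem itself):

#include <windows.h>

int main(void)
{
    HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);
    DWORD mode;

    /* If this fails, stdin was redirected and there is no console
       editing to fight with in the first place. */
    if (!GetConsoleMode(hIn, &mode))
        return 0;

    /* Cooked: Windows edits the line, but backspace stops at the
       start of the current line, as described above. */
    SetConsoleMode(hIn, mode | ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT);

    /* Raw: every keystroke arrives via ReadConsoleInput(), and all
       echoing and cursor movement becomes your job. */
    SetConsoleMode(hIn, mode & ~(ENABLE_LINE_INPUT | ENABLE_ECHO_INPUT));

    SetConsoleMode(hIn, mode);  /* restore the original mode */
    return 0;
}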
Assuming you want this to run in a window, like the command prompt does by default, rather than full screen, you could create a GUI application with a large textbox. Users would type into the textbox, and you could parse whatever was entered and output to the same box (effectively emulating the Win32 console).
This way whatever rules you want to set for how the console behaves is completely up to you.
I might be mistaken in saying this, but I believe the Win32 console from XP onward works exactly like this, just listening for output on stdout; there shouldn't be any reason you can't do the same.
Hope this was helpful.
I have a Win32 application written in C that can have its console output via printf() redirected to a log file.
It would be nice if my app could detect whether it had been started with or without a redirect '>'.
Any ideas?
Tom, thanks for the input.
I did some experiments and found that this works for me:
fpos_t pos;
fgetpos(stdout, &pos);
When an application's output is being redirected to a file, fgetpos() sets 'pos' to zero. That makes sense, since the stream has been freshly opened for you. EDIT: Actually, the value may be a positive integer if text has already been written to the log file, so in your code you'd have something like "if (pos >= 0) bfRedirected = TRUE;".
When an application's output is not being redirected, it's going to the console device, not a file, and fgetpos() will set 'pos' to -1.
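Putting that together, here is a sketch of the whole check (assuming the Microsoft CRT, where fpos_t is an integer type, so the comparison below compiles):

#include <stdio.h>

int main(void)
{
    fpos_t pos;
    int bfRedirected = 0;

    /* Console device: 'pos' comes back -1. Redirected file: 'pos'
       is the current offset, zero or greater. */
    if (fgetpos(stdout, &pos) == 0 && pos >= 0)
        bfRedirected = 1;

    fprintf(stderr, "redirected: %s\n", bfRedirected ? "yes" : "no");
    return 0;
}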
I think that pipes are blind by nature, so you don't have a built-in way to tell whether you're being redirected. Moreover, trying to find out what's happening behind an abstraction layer is a bad habit.
If you really need to control your output, add the log file name as a command line parameter!
That being said, you can do some educated guesswork to find out:
A program can query the shell command history to find out the most recent commands executed.
If you know the path to the logfiles, you can scan that directory and see if a file has been created or changed its size.
Benchmark writing speed when redirected and when not redirected. This would work only if your system is ultra-stable and environmental conditions don't change.
There may be a way to do this: a quick Google search yielded this hit, which might point you in the right direction.
Method from Microsoft: WriteConsole fails if it is used with a standard handle that is redirected to a file. If an application processes multilingual output that can be redirected, determine whether the output handle is a console handle (one method is to call the GetConsoleMode function and check whether it succeeds).
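A sketch of that documented check (GetConsoleMode succeeds only on real console handles):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD mode;
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);

    if (GetConsoleMode(hOut, &mode))
        fprintf(stderr, "stdout is the console\n");
    else
        fprintf(stderr, "stdout is redirected to a file or pipe\n");
    return 0;
}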
AFAIK the answer is that you can't. Output redirection works by simply reading the output stream of a given program and redirecting it to another pipe or stream. The design of files and streams is such that the writer is ignorant of the reader; you shouldn't even be able to tell that you are being read.
Even detecting that there was a reader would be of no use because there is one in the normal case. The console is reading the output of your program and displaying it to the screen already.