Cygwin 64-bit C compiler caching funny (and ending early) - windows

We've been using Cygwin (/usr/bin/x86_64-w64-mingw32-gcc) to generate 64-bit Windows executables, and it worked fine until yesterday. Today it stopped working in a bizarre way: it "caches" standard output until the program ends. I wrote a six-line example
that does the same thing. Since we use the code in batch, I wouldn't worry, except that when I run a test case on the now-strangely-caching executable, it opens the output files, ends early, and does not fill them with data. (The same code works fine on Linux, but these users are on Windows.) I know it's not gorgeous code, but it demonstrates my problem, printing the numbers "1 2 3 4 5 6 7 8 9 10" only after I press Enter.
#include <stdio.h>

int main(void)
{
    char q[256];
    int i;

    for (i = 1; i <= 10; i++)
        printf("%d ", i);
    gets(q);
    printf("\n");
    return 0;
}
Does anybody know enough CygWin to help me out here? What do I try? (I don't know how to get version numbers--I did try to get them.) I found a 64-bit cygwin1.dll in /cygdrive/c/cygwin64/bin and that didn't help a bit. The 32-bit gcc compilation works fine, but I need 64-bit to work. Any suggestions will be appreciated.
Edit: we found and corrected an unexpected error in the original code that caused the program not to populate the output files. At this point, the remaining problem is that cygwin won't show the output of the program.
For months, the 64-bit executable properly generated the expected output, just as the 32-bit version did. Just today, it started exhibiting the "caching" behavior described above. The program sends many hundreds of newline-terminated lines through stdout. Now, when the 64-bit executable is built as above, none of these lines are shown until the program completes, and then the entire output is printed at once. Can anybody provide insight into this problem?

This is quite normal. printf writes to stdout, which is a FILE* and is normally line-buffered when connected to a terminal. This means you will not see any output until you write a newline or until the internal buffer of the stdout FILE* fills up (a common buffer size is 4096 bytes).
If you write to a file or pipe, output may instead be fully buffered, in which case output is flushed only when the internal buffer is full, not when you write a newline.
In all cases, the buffers of a FILE* are flushed when you call fflush(), when you call fclose(), or when the program ends normally.
Your program will behave the same on Windows/Cygwin as on Linux.
You can add a call to fflush(stdout) to see the output immediately:
for (i = 1; i <= 10; i++) {
    printf("%d ", i);
    fflush(stdout);
}
Also, do not use the gets() function; it cannot check the size of its buffer and was removed from the C standard. Use fgets(q, sizeof q, stdin) instead.
If your real program "ends early" and does not write the data to the text files it's supposed to, it may be crashing due to a bug of yours before it finishes, in which case the buffered output will never be flushed. Or, less likely, you are calling the _exit() function, which terminates the program without flushing FILE* buffers (in contrast to the exit() function).

Related

Mac M1 `cp`ing binary over another results in crash

Recently, I've been observing an issue that happens after copying a binary file over another binary file without first deleting it on my M1. After some experimentation (after hitting this issue), I've come up with a reproducible method of hitting this issue on Apple's new hardware on the latest 11.3 release of Big Sur.
The issue happens when copying a differing binary over another binary after they have been run at least once. Not sure what is causing this issue, but it's very perplexing and could potentially lead to some security issues.
For example, this produces the error:
> ./binaryA
# output A
> ./binaryB
# output B
> cp binaryA binaryB
> ./binaryB
Killed: 9
Setup
In order to reproduce the above behavior, we can create two simple C files with the following contents:
// binaryA.c
#include <stdio.h>

int main() {
    printf("Hello world!");
}

// binaryB.c
#include <stdio.h>

const char s[] = "Hello world 123!"; // to make sizes differ for clarity

int main() {
    printf("%s", s);
}
Now, you can run the following commands and get the error described (the programs must be run before the issue can be reproduced, so running the programs below is necessary):
> gcc -o binaryA binaryA.c
> gcc -o binaryB binaryB.c
> ./binaryA
Hello world!
> ./binaryB
Hello world 123!
> cp binaryA binaryB
> ./binaryB
Killed: 9
As you can see, the binaryB binary no longer works. For all intents and purposes, the two binaries are equal but one runs and one doesn't. A diff of both binaries returns nothing.
I'm assuming this is some sort of signature issue? But it shouldn't be, because neither binary is signed anyway.
Does anyone have a theory behind this behavior or is it a bug? Also, if it is a bug, where would I even file this?
Whenever you update a signed file, you need to create a new file.
Specifically, the code signing information (code directory hash) is hung off the vnode within the kernel, and modifying the file behind that cache will cause problems. You need a new vnode, which means a new file, that is, a new inode. Documented in WWDC 2019 Session 703 All About Notarization - see slide 65.
This is because Big Sur on ARM M1 processor requires all code to be validly signed (if only ad hoc) or the operating system will not execute it, instead killing it on launch.
While Trev's answer is technically correct (best kind of correct?), the likely answer is also that this is a bug in cp - or at least an oversight in the interaction between cp and the security sandbox, which is causing a bad user experience (and bad UX == bug in my book, no matter the intention)
I'm going to take a wild guess (best kind of guess!) and posit that when this was first implemented, someone hooked into the inode deletion as a trigger for resetting the binary signature state. It is very possible that, at the time that they implemented this, cp actually removed/destructively replaced the vnode/inode as part of the copy, so everything worked great. Then, at some point, someone else went and optimized cp to no longer be a destructive inode operation - and this is how the best bugs come to be!
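The "new file, new inode" point above can be seen without any Mach-O binaries at all; here is a minimal sketch using plain files as stand-ins for the two binaries (the inode numbers are the point, not the contents):

```shell
cd "$(mktemp -d)"        # scratch directory
printf 'A' > binaryA     # stand-in for the first compiled binary
printf 'B' > binaryB     # stand-in for the second
ls -i binaryB            # note the inode number
rm -f binaryB            # deleting first forces cp to allocate a fresh inode...
cp binaryA binaryB       # ...so any per-vnode cache for the old file cannot apply
ls -i binaryB            # a different inode now
```

On macOS, if deleting first is inconvenient, re-signing the overwritten copy in place with an ad hoc signature (codesign --force --sign - binaryB) also clears the stale state.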

Output for CLion IDE sometimes cuts off when executing a program

When using CLion I have found the output sometimes cuts off.
For example when running the code:
main.cpp
#include <stdio.h>

int main() {
    int i;
    for (i = 0; i < 1000; i++) {
        printf("%d\n", i);
    }
    fflush(stdout); // Shouldn't be needed as each line ends with "\n"
    return 0;
}
Expected Output
The expected output is obviously the numbers 0-999, each on a new line.
Actual Output
After executing the code multiple times within CLion, the output often changes:
Sometimes it executes perfectly and shows all the numbers 0-999
Sometimes it cuts off at different points (e.g. 0-840)
Sometimes it doesn't output anything
The return code is always 0!
Running the code in a terminal (i.e. not in CLion itself)
However, the code outputs the numbers 0-999 perfectly when compiling and running the code using the terminal.
I have spent so much time on this thinking it was a problem with my code and a memory issue until I finally realised that this was just an issue with CLion.
OS: Ubuntu 14.04 LTS
Version: 2016.1
Build: #CL-145.258
Update
The consensus is that this is an IDE issue, so I have reported the bug.
A suitable workaround is to run the code in debug mode (thanks to #olaf); no breakpoint is required.
I will update this question as soon as this bug is fixed.
Update 1
WARNING: You should not change information in the registry unless you have been asked to specifically by JetBrains. The registry is not in the main menu for a reason! Use the following solution at your own risk!
JetBrains have contacted me and provided a suitable solution:
Go to the Find Action Dialog box (CTRL+SHIFT+A)
Search for "Registry..."
Untick run.processes.with.pty
Should then work fine!
Update 2
The bug has been added here:
https://youtrack.jetbrains.com/issue/CPP-6254
Feel free to upvote it!

Python subprocess Popen: Send binary data to C++ on Windows

After three days of intensive googling and stackoverflowing, I have more or less got my program to work. I tried a lot of stuff and found a lot of answers somehow connected to my problem, but no working solution. Sorry if I missed the right page! I'm looking forward to comments and recommendations.
Task:
Send binary data (floats) from python to C++ program, get few floats back
Data is going to be 20ms soundcard input, latency is a bit critical
Platform: Windows (only due to drivers for the soundcard...)
Popen with pipes, but without communicate, because I want to keep the C++ program opened
The whole thing worked just fine on Ubuntu with test data. On Windows I ran into the binary stream problem: in text mode, Windows treats the Ctrl-Z byte (0x1A) as an end-of-file marker, and a raw float stream contains that byte randomly. Then everything freezes, waiting for input data that is just behind the "EOF" wall. Or so I picture it.
In the end these two things were necessary:
#include <io.h>
#include <fcntl.h>
and
if (_setmode(_fileno(stdin), _O_BINARY) == -1) {
    cout << "binary mode problem" << endl;
    return 1;
}
in C++ as described here: https://msdn.microsoft.com/en-us/library/aa298581%28v=vs.60%29.aspx.
cin.ignore() freezes in binary mode! I guess that's because there's no EOF anymore. I did not try/think about this too thoroughly, though.
cin.read(mem,sizeof(float)*length) does the job, since I know the length of the data stream
Compiled with MinGW
and in the Python code same thing! (forgot this first, cost me a day):
process = subprocess.Popen("cprogram.exe", stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, bufsize=2**12)
if sys.platform.find("win") > -1:
    import msvcrt, os
    msvcrt.setmode(process.stdin.fileno(), os.O_BINARY)
and
process.stdin.write(data.tostring())

MPI : Wondering about process sequence and for-loops

I am trying to build something with MPI, and since I am not very familiar with it, I started with some arrays and printing stuff. I noticed that a plain C statement (not an MPI one) runs simultaneously on every process, i.e. printing something like this:
printf("Process No.%d", rank);
Then I noticed that the numbers of the processes came out all scrambled, and because the right sequence of the processes would suit me, I tried using a for-loop like this:
for (rank = 0; rank < processes; rank++) printf("Process No.%d", rank);
And that started a third world war in my computer: lots of strange errors in a strange format that I couldn't understand and that made me suspicious. How is it possible that an if statement testing a rank's value, like the master rank:
if (rank == 0) printf("Process No.%d", rank);
works, but a for-loop over the same values doesn't? Well, that is my first question.
My second question is about another for-loop I used, which got ignored.
printf("PROCESS --------------->**%d**\n", id);
for (i = 0; i < PARTS; ++i) {
    printf("Array No.%d\n", i + 1);
    for (j = 0; j < MAXWORDS; ++j)
        printf("%d, ", 0);
    printf("\n\n");
}
I run that for-loop and every process printed only the first line:
$ mpiexec -n 6 `pwd`/test
PROCESS --------------->**0**
PROCESS --------------->**1**
PROCESS --------------->**3**
PROCESS --------------->**2**
PROCESS --------------->**4**
PROCESS --------------->**5**
And not the following rows of zeros (there was an array there at first that I removed because I was trying to figure out why it didn't get printed).
So, why is it about MPI and for-loops that don't get along?
--edit 1: grammar
--edit 2: Code paste
It is not the same as above, but it has the same problem in the last for-loop with fprintf.
This is a paste zone, sorry for that; I couldn't deal with the code system here.
--edit 3: fixed
Well, I finally figured it out. First I have to say that the fprintf function is a mess when used inside MPI: apparently the processes overlap while they all write to the same text file. I tested it with the printf function and it worked. The second thing I was doing wrong is that I was calling the MPI_Scatter function only from inside the root:
if (rank == root) MPI_Scatter();
..which only scatters the data inside the root process and not to the others; MPI_Scatter is a collective call, so every rank must execute it.
Now that I have fixed those two issues, the program works as it should, apart from a minor problem when I printf the my_list arrays. It seems like every array has a random number of entries, but when I tested with a counter for every array, it's only the printed output that looks like this. I tried using fflush(stdout); but it returned an error:
usr/lib/gcc/x86_64-pc-linux-gnu/4.2.2/../../../../x86_64-pc-linux-gnu/bin/ld: final link failed: `Input/output error collect2: ld returned 1 exit status`
MPI in and of itself does not have a problem with for loops. However, just like with any other software, you should always remember that it will work the way you code it, not the way you intend it. It appears that you are having two distinct issues, both of which are only tangentially related to MPI.
The first issue is that the variable PARTS is defined in such a way that it depends on another variable, procs, which is not initialized beforehand. This means that the value of PARTS is undefined, and it probably ends up causing a divide-by-zero as often as not. PARTS should actually be set after line 44, where procs is initialized.
The second issue is with the loop for(i = 0; i = LISTS; i++) labeled /*Here is the problem*/. First of all, the test condition of the loop always sets i to the value of LISTS, regardless of the initial value of 0 and the increment at the end of the loop. Perhaps it was intended to be i < LISTS? Secondly, LISTS is initialized in a way that depends on PARTS, which depends on procs, before that variable is initialized. As with PARTS, LISTS must be initialized after the statement MPI_Comm_size(MPI_COMM_WORLD, &procs); on line 44.
Please be more careful when you write your loops. Also, make sure that you initialize variables correctly. I highly recommend using print statements (for small programs) or the debugger to make sure your variables are being set to the expected values.

OpenMP C and C++ cout/printf does not give the same output

I am a complete noob at OpenMP and just started by exploring the simple test snippet below.
#include <iostream>
using namespace std;

int main() {
    #pragma omp parallel
    {
        #pragma omp for
        for (int i = 0; i < 10; ++i)
            std::cout << i << " " << endl;
            // printf("%d \n", i);
    }
}
I tried the C and C++ version and the C version seems to work fine whereas the C++ version gives me a wrong output.
Many implementations of printf acquire a lock to ensure that each printf call is not interrupted by other threads.
In contrast, std::cout's overloaded << operator means that (even with a lock) one thread's printing of i and ' ' and '\n' can be interleaved with another thread's output, because std::cout<<i<<" "<<endl; is translated to three operator<<() function calls by the C++ compiler.
This is outdated, but perhaps it can still help someone:
It's not really clear what output you expect, but be aware of the following:
If your loop variable "i" were shared among threads, you would have a race condition on its contents: one thread could change "i" without another taking note, so a wrong value would be printed. (With #pragma omp for, the loop variable is private by default.)
endl flushes the stream after ending the line; "\n" gives a newline without the flush. And std::cout is a single shared object, so multiple threads race for access to it; when their writes are not synchronized, the output can interleave.
To rule these out, you can declare "i" private so that every thread counts "i" itself, and you can experiment with flushing the output to see whether that is related to the problem you are experiencing.
