Output for CLion IDE sometimes cuts off when executing a program

When using CLion, I have found that the output sometimes cuts off.
For example when running the code:
main.cpp
#include <stdio.h>

int main() {
    int i;
    for (i = 0; i < 1000; i++) {
        printf("%d\n", i);
    }
    fflush(stdout); // Shouldn't be needed, as each line ends with "\n"
    return 0;
}
Expected Output
The expected output is obviously the numbers 0-999, each on a new line.
Actual Output
After executing the code multiple times within CLion, the output often changes:
Sometimes it executes perfectly and shows all the numbers 0-999
Sometimes it cuts off at different points (e.g. 0-840)
Sometimes it doesn't output anything
The return code is always 0!
Running the code in a terminal (i.e. not in CLion itself)
However, the code prints the numbers 0-999 perfectly when compiled and run from a terminal.
I spent so much time on this thinking it was a problem with my code or a memory issue, until I finally realised that it was just an issue with CLion.
OS: Ubuntu 14.04 LTS
Version: 2016.1
Build: #CL-145.258
Update
A suitable workaround is to run the code in debug mode, no breakpoint required (thanks to @olaf).

The consensus is that this is an IDE issue. Therefore, I have reported the bug.
I will update this question as soon as the bug is fixed.
Update 1
WARNING: You should not change information in the Registry unless you have been asked to specifically by JetBrains. The Registry is not in the main menu for a reason! Use the following solution at your own risk!
JetBrains have contacted me and provided a suitable solution:
Go to the Find Action Dialog box (CTRL+SHIFT+A)
Search for "Registry..."
Untick run.processes.with.pty
Should then work fine!
Update 2
The bug has been added here:
https://youtrack.jetbrains.com/issue/CPP-6254
Feel free to upvote it!

Related

Mac M1 `cp`ing binary over another results in crash

Recently, I've been observing an issue that happens after copying a binary file over another binary file without first deleting it on my M1. After some experimentation (after hitting this issue), I've come up with a reproducible method of hitting this issue on Apple's new hardware on the latest 11.3 release of Big Sur.
The issue happens when copying a differing binary over another binary after both have been run at least once. I'm not sure what is causing this issue, but it's very perplexing and could potentially lead to some security issues.
For example, this produces the error:
> ./binaryA
# output A
> ./binaryB
# output B
> cp binaryA binaryB
> ./binaryB
Killed: 9
Setup
In order to reproduce the above behavior, we can create two simple C files with the following contents:
// binaryA.c
#include <stdio.h>

int main() {
    printf("Hello world!");
}

// binaryB.c
#include <stdio.h>

const char s[] = "Hello world 123!"; // to make sizes differ for clarity

int main() {
    printf("%s", s);
}
Now you can run the following commands and get the error described (each binary must be run once before the issue can be reproduced, so the run steps below are necessary):
> gcc -o binaryA binaryA.c
> gcc -o binaryB binaryB.c
> ./binaryA
Hello world!
> ./binaryB
Hello world 123!
> cp binaryA binaryB
> ./binaryB
Killed: 9
As you can see, the binaryB binary no longer works. For all intents and purposes, the two binaries are equal but one runs and one doesn't. A diff of both binaries returns nothing.
I'm assuming this is some sort of signature issue? But it shouldn't be, because neither binary is signed anyway.
Does anyone have a theory behind this behavior or is it a bug? Also, if it is a bug, where would I even file this?
Whenever you update a signed file, you need to create a new file.
Specifically, the code signing information (code directory hash) is hung off the vnode within the kernel, and modifying the file behind that cache will cause problems. You need a new vnode, which means a new file, that is, a new inode. Documented in WWDC 2019 Session 703 All About Notarization - see slide 65.
This is because Big Sur on the ARM M1 processor requires all code to be validly signed (if only ad hoc), or the operating system will not execute it, instead killing it on launch.
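If that explanation is right, forcing cp to create a fresh inode should avoid the kill. A sketch following the commands above:
> rm binaryB
> cp binaryA binaryB
> ./binaryB
Hello world!
Re-signing the copied file in place should also clear the stale state, assuming codesign's ad-hoc signing is acceptable: codesign -f -s - binaryB.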
While Trev's answer is technically correct (best kind of correct?), the likely answer is also that this is a bug in cp - or at least an oversight in the interaction between cp and the security sandbox, which is causing a bad user experience (and bad UX == bug in my book, no matter the intention)
I'm going to take a wild guess (best kind of guess!) and posit that when this was first implemented, someone hooked into the inode deletion as a trigger for resetting the binary signature state. It is very possible that, at the time that they implemented this, cp actually removed/destructively replaced the vnode/inode as part of the copy, so everything worked great. Then, at some point, someone else went and optimized cp to no longer be a destructive inode operation - and this is how the best bugs come to be!

Expression must have a constant value error in array via MPI world size

Recently, I started learning MPI programming, and I have tried it on both Linux and Windows. I do not have any problem running the MPI application on Linux; however, I stumbled upon an "expression must have a constant value" error in Visual Studio.
For example, I'm trying to get the world_size via MPI_Comm_size(MPI_COMM_WORLD, &world_size); and create an array based on world_size:
Code Sample:
#include <mpi.h>

int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int database[world_size]; // error occurred here
However, when I run it on Linux, it works perfectly fine, as I'm able to execute the code while stating the number of processes I wish to have. Am I missing anything? I followed this particular YouTube link that taught me how to install MS-MPI for Visual Studio 2015.
Any help would be greatly appreciated.
Automatic array sizing using non-const values (a variable-length array) actually works with gcc (https://gcc.gnu.org/onlinedocs/gcc/Variable-Length.html). However, it's considered a bad idea because, as you've just experienced, your code won't be portable anymore. You just need to change your code to create the array using new. You might also want gcc to generate an error for VLAs, to make sure your code stays portable: Disable variable-length automatic arrays in gcc
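A minimal sketch of that change, assuming the usual MPI boilerplate around the fragment in the question (the database name comes from the question; everything else is standard MPI):
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Heap allocation instead of a VLA: this compiles on MSVC, gcc, and clang.
    int *database = new int[world_size];

    // ... use database ...

    delete[] database;
    MPI_Finalize();
    return 0;
}
(In C++, std::vector<int> database(world_size); would avoid the manual delete[].)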

Errors in OpenCL kernel code at runtime

I am new to Visual Studio and I am using it to write a simple parallel sorting program using OpenCL.
When I run it, I get a line before my output (i.e. before I receive and print the result buffer) saying "5 Errors Generated.".
I assume this is telling me that I have errors in my kernel file, and if I deliberately write errors in my kernel file that number increases.
I would really like to know what those errors are so I can correct my program. Being unfamiliar with VS, I simply cannot find them listed anywhere.
Does anyone know where I can find what errors are being generated?
Thanks
You need to call clGetProgramBuildInfo asking for CL_PROGRAM_BUILD_LOG in order to get the kernel compiler's error messages.
char result[4096];
size_t size;
clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, sizeof(result), result, &size);
printf("%s\n", result);
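A slightly fuller sketch that only queries the log when the build fails, and sizes the buffer to fit. This assumes the same program and device handles used for clBuildProgram, with <stdio.h>, <stdlib.h>, and the OpenCL header included:
cl_int err = clBuildProgram(program, 1, &device, NULL, NULL, NULL);
if (err != CL_SUCCESS) {
    size_t log_size = 0;
    // First call: ask only for the size of the build log.
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
    char *log = (char *)malloc(log_size);
    // Second call: fetch the log itself.
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
    printf("%s\n", log);
    free(log);
}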

Cygwin 64-bit C compiler caching funny (and ending early)

We've been using Cygwin (/usr/bin/x86_64-w64-mingw32-gcc) to generate Windows 64-bit executable files, and it had been working fine through yesterday. Today it stopped working in a bizarre way: it "caches" standard output until the program ends. I wrote a six-line example that does the same thing. Since we use the code in batch, I wouldn't worry, except that when I run a test case on the now-strangely-caching executable, it opens the output files, ends early, and does not fill them with data. (The same code on Linux works fine, but these guys are using Windows.) I know it's not gorgeous code, but it demonstrates my problem, printing the numbers "1 2 3 4 5 6 7 8 9 10" only after I press the key.
#include <stdio.h>

int main(void)
{
    char q[256];
    int i;

    for (i = 1; i <= 10; i++)
        printf("%d ", i);
    gets(q);
    printf("\n");
    return 0;
}
Does anybody know enough Cygwin to help me out here? What do I try? (I don't know how to get version numbers; I did try to get them.) I found a 64-bit cygwin1.dll in /cygdrive/c/cygwin64/bin, and that didn't help a bit. The 32-bit gcc compilation works fine, but I need 64-bit to work. Any suggestions will be appreciated.
Edit: We found and corrected an unexpected error in the original code that caused the program not to populate the output files. At this point, the remaining problem is that Cygwin won't show the output of the program.
For months, the 64-bit executable has properly generated the expected output, just as the 32-bit version did. Just today, it has started exhibiting the "caching" behavior described above. The program sends many hundreds of lines, with many newline characters, through stdout. Now, when the 64-bit executable is created as above, none of these lines are shown until the program completes and the entire output is printed at once. Can anybody provide insight into this problem?
This is quite normal. printf outputs to stdout which is a FILE* and is normally line buffered when connected to a terminal. This means you will not see any output until you write a newline, or the internal buffer of the stdout FILE* is full (A common buffer size is 4096 bytes).
If you write to a file or pipe, output might be fully buffered, in which case output is flushed when the internal buffer is full and not when you write a newline.
In all cases, the buffers of a FILE* are flushed when you call fflush(..), when you call fclose(..) on it, or when the program ends normally.
Your program will behave the same on windows/cygwin as on linux.
You can add a call to fflush(stdout) to see the output immediately.
for (i = 1; i <= 10; i++) {
    printf("%d ", i);
    fflush(stdout);
}
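Alternatively, you can request line buffering explicitly with setvbuf; a minimal sketch (setvbuf is standard C; it must be called before the first output, and how faithfully _IOLBF is honored for pipes can vary between runtimes):
#include <stdio.h>

int main(void)
{
    /* Must be called before the first output on stdout. */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);
    printf("this line is flushed at the newline, even into a pipe\n");
    return 0;
}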
Also, do not use the gets() function.
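A minimal rework of your example using fgets(), which bounds the read by the buffer size (gets() cannot, and was removed from the C standard in C11 for exactly that reason):
#include <stdio.h>

int main(void)
{
    char q[256];
    int i;

    for (i = 1; i <= 10; i++) {
        printf("%d ", i);
        fflush(stdout);                     /* force the digits out immediately */
    }
    if (fgets(q, sizeof q, stdin) == NULL)  /* bounded read, unlike gets() */
        q[0] = '\0';
    printf("\n");
    return 0;
}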
If your real program "ends early" and does not write the data to the text files it's supposed to, it may be that it crashes due to a bug of yours before it finishes, in which case the buffered output will not be flushed. Or, less likely, you call the _exit() function, which terminates the program without flushing the FILE* buffers (in contrast to the exit() function).

Crashes when using boost::random

I have a relatively serious problem with boost::random.
Background: I'm using TDM-GCC x64 on Windows 7 x64. Compiler options are -g -Wall -fexceptions
I built Boost using the same compiler environment, but that shouldn't matter when using random, since it is header-only(?)
So now my problem:
I have this function:
#define PRNG_GENERATOR boost::mt19937

COORD function_g(int depth)
{
    double _range;
    _range = 1/(depth + 1.0f);

    boost::uniform_real<double> range(-_range, _range);
    boost::variate_generator<PRNG_GENERATOR&, boost::uniform_real<double> > v_png(*this->m_prng, range);

    return v_png();
}
When I call this function, my program crashes with a c0000026 error in the ntdll.dll module.
The crash is always displayed by gdb on the first line of the ()-operator of the Boost random number engine (in this case in the file mersenne_twister.hpp at line 319, which is "if(i == n)" - not something I would expect to cause a crash).
And the even stranger thing is that this crash just appeared: I didn't commit any code changes, just did a (clean) recompile, and every build after the first one showing the crash also crashes!?
I have now spent about an hour searching the internet for this mysterious c0000026 error, but didn't find anything valuable.
Does anyone have a tip on how to resolve this issue?
You haven't shown us how this->m_prng is initialized. Are you sure it points to a valid object of type boost::mt19937? The rest of it looks OK, as far as I can tell.
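For reference, a minimal sketch of one way to make sure the engine outlives every call. The class name and seed here are made up, only m_prng and the function body follow the question, and it returns double since COORD isn't defined in the question:
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_real.hpp>
#include <boost/random/variate_generator.hpp>

#define PRNG_GENERATOR boost::mt19937

class Sampler
{
public:
    Sampler() : m_engine(42), m_prng(&m_engine) {}  // engine is valid before any call

    double function_g(int depth)
    {
        double _range = 1/(depth + 1.0f);
        boost::uniform_real<double> range(-_range, _range);
        boost::variate_generator<PRNG_GENERATOR&, boost::uniform_real<double> >
            v_png(*this->m_prng, range);            // dereferences a live engine
        return v_png();
    }

private:
    PRNG_GENERATOR m_engine;  // owned by value: cannot dangle
    PRNG_GENERATOR *m_prng;   // kept only to mirror the question's usage
};
If the real m_prng is a raw pointer that is newed elsewhere, a stale or uninitialized pointer would crash exactly where the question describes, inside operator() of the engine.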
