Calling pthread_cond_destroy results in "Function not implemented" ENOSYS on macOS

I am trying to make some Linux-based code run on macOS. It is the POSIX OSAL layer for NASA Core Flight System as found here: https://github.com/nasa/osal.
The code uses POSIX condition variables, and in particular there is a call like the following:
if (pthread_cond_destroy(&(sem->cv)) != 0) {
    printf("pthread_cond_destroy %d %s\n", errno, strerror(errno)); // my addition
    ...
}
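As an aside on the error reporting itself: pthread functions report failures via their return value rather than errno, so the errno printed above may be a stale value. A fragment sketch of the same check that logs the return value instead (same hypothetical sem->cv context as above):

int rc = pthread_cond_destroy(&(sem->cv));
if (rc != 0) {
    // pthread_* functions return the error number; errno is not where the error is reported
    printf("pthread_cond_destroy %d %s\n", rc, strerror(rc));
}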
On macOS, the tests for this code provided in the OSAL repository always fail because the call to pthread_cond_destroy results in:
pthread_cond_destroy 78 Function not implemented
I have found an example in the Apple documentation on Using Conditions (Threading Programming Guide / Synchronization / Using Conditions). That example contains no call to pthread_cond_destroy, but I cannot draw any conclusions about whether the call should be there, because the example is simplified.
This is what the header looks like on my machine:
__API_AVAILABLE(macos(10.4), ios(2.0))
int pthread_cond_destroy(pthread_cond_t *);
I am wondering whether pthread_cond_* functionality is simply missing on macOS and I have to implement a replacement for it, or whether there is some way to make it work.
EDIT: The minimal example below works fine for me, so the problem must be somewhere around the problematic code. What I still don't understand is why I am getting the ENOSYS/78 error code; for one thing, it is not mentioned on the man page for pthread_cond_destroy(3):
#include <cassert>
#include <cerrno>
#include <iostream>
#include <pthread.h>

int main() {
    pthread_cond_t condition;
    pthread_cond_init(&condition, NULL);
    int result = pthread_cond_destroy(&condition);
    assert(result == 0);
    assert(errno == 0);
    std::cout << "Hello, World!" << std::endl;
    return 0;
}

Related

May the translation-function set with _set_se_translator just return without throwing?

If so, would this mean that the further processing goes the way of normal SEH-processing?
[EDIT]: I tried it out myself:
#include <Windows.h>
#include <eh.h>        // _set_se_translator
#include <iostream>
#include <stdexcept>
using namespace std;

int main()
{
    _set_se_translator( []( unsigned int, EXCEPTION_POINTERS * ) { } );
    __try
    {
        RaiseException( EXCEPTION_IN_PAGE_ERROR, 0, 0, nullptr );
    }
    __except( EXCEPTION_EXECUTE_HANDLER )
    {
        cout << "caught" << endl;
    }
}
Is this specified to work?
From the documentation (emphasis mine):
Your translator function should do no more than throw a C++ typed
exception. If it does anything in addition to throwing (such as
writing to a log file, for example) your program might not behave as
expected because the number of times the translator function is
invoked is platform-dependent.
If we take this completely literally, then a translator function should not return, as this is doing something 'more' than throwing a typed exception. However, I can find no specific mention in that document (or any related ones) that the function should never return, and neither does the function's prototype specify the [[noreturn]] attribute (though that, in itself, may not mean very much).
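For contrast, here is a minimal sketch of my own (MSVC-only, compiled with /EHa) in which the translator does throw a typed exception, which is then caught as an ordinary C++ exception:

#include <Windows.h>
#include <eh.h>
#include <iostream>
#include <stdexcept>
#include <string>

int main()
{
    // The translator converts the SEH exception into a C++ typed exception.
    _set_se_translator( []( unsigned int code, EXCEPTION_POINTERS * )
    {
        throw std::runtime_error( "SEH exception, code " + std::to_string( code ) );
    } );

    try
    {
        RaiseException( EXCEPTION_IN_PAGE_ERROR, 0, 0, nullptr );
    }
    catch( const std::exception &e )
    {
        std::cout << "caught: " << e.what() << std::endl;
    }
}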

Why does the output for the auto variable display something not related to the type?

I tried an example on auto for variable initialization and STL in C++. For a normal variable, the type was printed using typeid(var_name).name(), which prints i (integer) / d (float) / pi (pointer) and works fine.
But while working on STL,
#include <iostream>
#include <string>
#include <typeinfo>
#include <vector>
using namespace std;

int main()
{
    vector<string> st;
    st.push_back("geeks");
    st.push_back("for");
    for (auto it = st.begin(); it != st.end(); it++)
        cout << typeid(it).name() << "\n";
    return 0;
}
which gives output like,
N9__gnu_cxx17__normal_iteratorIPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIS6_SaIS6_EEEE
N9__gnu_cxx17__normal_iteratorIPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt6vectorIS6_SaIS6_EEEE
I am unable to understand the logic behind this output. Can anyone explain why it is printed like this? Thanks in advance.
That's the "name mangled" version of the name of the type of it. typeinfo::name() is not required by the standard to return a name in human-readable format (a shortcoming IMHO) and GCC doesn't do so.
To get the actual, human-readable name, you need to call the abi::__cxa_demangle() function provided by GCC, but note that this is non-portable so if your project needs to work on different compilers you'll need to wrap it appropriately.
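A minimal sketch of that wrapping (GCC/Clang only; the vector/iterator setup simply mirrors the question):

#include <cxxabi.h>
#include <cstdlib>
#include <iostream>
#include <string>
#include <typeinfo>
#include <vector>

int main()
{
    std::vector<std::string> st;
    auto it = st.begin();

    // __cxa_demangle returns a malloc'd string that the caller must free.
    int status = 0;
    char *readable = abi::__cxa_demangle(typeid(it).name(), nullptr, nullptr, &status);
    if (status == 0 && readable != nullptr)
        std::cout << readable << "\n";
    std::free(readable);
    return 0;
}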

How does KLEE count the number of branches?

I'm using KLEE 2.9 and trying to obtain branch information from the stats file KLEE generates. I fed in a program with a single if-else statement, and KLEE reported NumBranches as 8.
The code under test is shown below:
#include <stdio.h>
#include <stdbool.h>
#include <klee/klee.h>

int main(){
    int a;
    int b;
    klee_make_symbolic(&a, sizeof(a), "a");
    klee_make_symbolic(&b, sizeof(b), "b");
    if (a / b == 1) {
        printf("a==b\n");
    }
    else {
        printf("a!=b\n");
    }
    return 0;
}
and the output file run.stats is shown below:
('Instructions','FullBranches','PartialBranches','NumBranches','UserTime','NumStates','MallocUsage','NumQueries','NumQueryConstructs','NumObjects','WallTime','CoveredInstructions','UncoveredInstructions','QueryTime','SolverTime','CexCacheTime','ForkTime','ResolveTime',)
(0,0,0,8,5.609000e-03,0,528704,0,0,0,4.196167e-05,0,78,0.000000e+00,0.000000e+00,0.000000e+00,0.000000e+00,0.000000e+00)
(32,2,0,8,9.722000e-03,0,654176,3,56,0,3.826760e-01,27,51,3.799300e-01,3.802470e-01,3.801040e-01,6.900000e-05,0.000000e+00)
Can anyone explain to me where the 8 comes from?
Two possible reasons:
1. klee_make_symbolic and printf contain conditional statements. When KLEE executes the program, it does not differentiate your functions from external functions.
2. If you run KLEE with --libc=uclibc, the main function will be replaced with __uclibc_main. __uclibc_main first does some initialization work and then calls the original main function. The initialization might contain some conditional statements.
You need to check the version of KLEE and the commands you used.

Using OpenGL Vertex Buffer Objects with Dynamically linked OpenGL from Windows

I am working on setting up a basic OpenGL application by dynamically linking the opengl32.dll file pre-packaged with Windows (that part is non-optional). However, I am having quite a lot of difficulty getting procedure addresses for the functions related to Vertex Buffer Objects.
My initial investigations have revealed that Windows only exposes the OpenGL 1.1 specification directly, and wglGetProcAddress calls need to be used to get any functions more recent than that. So I modified my code to attempt that method as well. I am using glGenBuffers as my example case, and have attempted four different ways to load it, all of which fail. I have also used glGetString to check my version number, which is reported as major version 4, so I doubt it lacks VBO support.
How should I be getting the proc addresses for these VBO functions?
A minimized example of the code I'm dealing with is here:
#include <iostream>
#include "windows.h"
using namespace std;

int main()
{
    //Load OpenGL and get necessary functions
    HINSTANCE hDLL = LoadLibrary("opengl32.dll");
    PROC WINAPI(*winglGetProcAddress)(LPCSTR);
    void(*genBuffers)(int, unsigned int*);
    if(hDLL)
    {
        winglGetProcAddress = (PROC WINAPI(*)(LPCSTR))GetProcAddress(hDLL, "wglGetProcAddress");
        if(winglGetProcAddress == NULL){cout << "wglGetProcAddress not found!" << endl; return 0;}
        genBuffers = (void(*)(int, unsigned int*))GetProcAddress(hDLL, "glGenBuffers");
        if(genBuffers == NULL){genBuffers = (void(*)(int, unsigned int*))winglGetProcAddress("glGenBuffers");}
    }
    else
    {cout << "This application requires Open GL support." << endl; return 0;}
    //glGenBuffers not supported, fall back to glGenBuffersARB
    if(genBuffers == NULL)
    {
        genBuffers = (void(*)(int, unsigned int*))GetProcAddress(hDLL, "glGenBuffersARB");
        if(genBuffers == NULL){genBuffers = (void(*)(int, unsigned int*))winglGetProcAddress("glGenBuffersARB");}
        if(genBuffers == NULL)
        {cout << "Could not locate glGenBuffers or glGenBuffersARB in opengl32.dll." << endl; return 0;}
    }
    //get a Vertex Buffer Object
    unsigned int a[1];
    genBuffers(1, a);
    //cleanup
    if(!FreeLibrary(hDLL))
    {cout << "Failed to free the opengl32.dll library." << endl;}
    return 0;
}
When run, it loads the library and gets wglGetProcAddress correctly, but then outputs the "Could not locate glGenBuffers or glGenBuffersARB in opengl32.dll." error, indicating it failed to get either "glGenBuffers" or "glGenBuffersARB" using either GetProcAddress or wglGetProcAddress.
Alternatively, if this does mean I do not have VBO support, will a driver update help, or is it even possible to get it supported? I'd really rather not use deprecated immediate mode calls.
I am running this in Code::Blocks, on Windows XP, Intel Core i5, with an NVIDIA GeForce GTX 460.

How to handle seg faults under Windows?

How can a Windows application handle segmentation faults? By 'handle' I mean intercept them and perhaps output a descriptive message. Also, the ability to recover from them would be nice too, but I assume that is too complicated.
Let them crash and let Windows Error Reporting handle it. Under Vista+, you should also consider registering with the Restart Manager (http://msdn.microsoft.com/en-us/library/aa373347(VS.85).aspx), so that you have a chance to save out the user's work and restart the application (like Word/Excel/etc. do).
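One way to register for an automatic restart after a crash is RegisterApplicationRestart; a minimal sketch, assuming the build targets Vista or later (_WIN32_WINNT >= 0x0600):

#include <windows.h>

int main()
{
    // Ask Windows to relaunch this process if it crashes or hangs.
    // NULL command line = restart with the same arguments; 0 = no restriction flags.
    HRESULT hr = RegisterApplicationRestart(NULL, 0);
    if (FAILED(hr))
    {
        // Not fatal: the application simply won't be restarted automatically.
    }

    // ... normal application work; save state periodically so a restart can resume ...
    return 0;
}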
Use SEH for early exception handling,
and use SetUnhandledExceptionFilter to show a descriptive message.
If you add the /EHa compiler argument, then try {} catch(...) will catch all exceptions for you, including SEH exceptions.
You can also use __try {} __except {}, which gives you more flexibility on what to do when an exception is caught. Putting a __try {} __except {} around your entire main() function is somewhat equivalent to using SetUnhandledExceptionFilter().
That being said, you should also use the proper terminology: "seg fault" is a UNIX term. There are no segmentation faults on Windows. On Windows they are called "Access Violation" exceptions.
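A minimal sketch of the __try/__except approach described above (MSVC-specific SEH keywords; the null write is only there to trigger the exception):

#include <windows.h>
#include <stdio.h>

int main()
{
    __try
    {
        int *v = 0;
        *v = 0;  // access violation
    }
    __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                  ? EXCEPTION_EXECUTE_HANDLER
                  : EXCEPTION_CONTINUE_SEARCH)
    {
        printf("caught access violation\n");
    }
    return 0;
}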
C++ self-contained example on how to use SetUnhandledExceptionFilter, triggering a write fault and displaying a nice error message:
#include <windows.h>
#include <sstream>
#include <cstdlib>

LONG WINAPI TopLevelExceptionHandler(PEXCEPTION_POINTERS pExceptionInfo)
{
    std::stringstream s;
    s << "Fatal: Unhandled exception 0x" << std::hex << pExceptionInfo->ExceptionRecord->ExceptionCode
      << std::endl;
    MessageBoxA(NULL, s.str().c_str(), "my application", MB_OK | MB_ICONSTOP);
    exit(1);
    return EXCEPTION_CONTINUE_SEARCH;
}

int main()
{
    SetUnhandledExceptionFilter(TopLevelExceptionHandler);
    int *v = 0;
    v[12] = 0; // should trigger the fault
    return 0;
}
Tested successfully with g++ (and should work OK with MSVC++ as well)
What you want to do here depends on what sort of faults you are concerned with. If you have sloppy code that is prone to more or less random general protection violations, then @Paul Betts' answer is what you need.
If you have code that has a good reason to dereference bad pointers, and you want to recover, start from @whunmr's suggestion about SEH. You can handle and indeed recover, if you have clear enough control of your code to know exactly what state it is in at the point of the fault and how to go about recovering.
Similar to Jean-François Fabre's solution, but with POSIX code in MinGW-w64. Note that the program must exit; it can't recover from the SIGSEGV and continue.
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>

void sigHandler(int s)
{
    printf("signal %d\n", s);
    exit(1);
}

int main()
{
    signal(SIGSEGV, sigHandler);
    int *v = 0;
    *v = 0; // trigger the fault
    return 0;
}
