Conditionally include GLES2 where needed

I'm trying to use functions such as glCreateShader, glGetShaderiv, and glDeleteProgram, but they don't exist when I include GL/gl.h or GL/glext.h on a Linux system.
This is probably because it's a laptop from 2008, with the integrated graphics reporting:
2.1 Mesa 20.1.10
and the discrete card:
3.0 Mesa 20.1.10
When I include GLES2/gl2.h instead, everything works fine and the symbols are resolved.
I currently won't have access to a desktop computer, or any machine with a newer graphics stack, for at least a month. I'm assuming that on such systems the shader functionality is available through the standard GL/gl.h header.
How can I write a conditional include that pulls in GLES only where needed? I don't want to add a variable to the Makefile; I want to detect automatically, at compile time with #ifdef sequences, a system where these symbols are only available as GLES components.

You have to use an OpenGL loader such as GLEW or glad. OpenGL (ES) is only a specification; the loader is what gives you access to the API functions that the graphics driver actually provides.
See OpenGL Loading Library - glad:
const SDL_GLContext context = SDL_GL_CreateContext(window);
if (context == nullptr) {
    std::cout << "SDL could not create context";
    return 1;
}
// Resolve the GL function pointers through the context's resolver.
// (glad's documentation example uses glfwGetProcAddress; with an SDL
// context, SDL_GL_GetProcAddress is the matching resolver.)
if (!gladLoadGLLoader((GLADloadproc)SDL_GL_GetProcAddress))
{
    std::cout << "Failed to initialize OpenGL context" << std::endl;
    return -1;
}
See Initialize GLEW:
const SDL_GLContext context = SDL_GL_CreateContext(window);
if (context == nullptr) {
    std::cout << "SDL could not create context";
    return 1;
}
// glewInit() must be called after a GL context has been made current
if (glewInit() != GLEW_OK)
{
    std::cout << "Failed to initialize GLEW" << std::endl;
    return -1;
}
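GLEW offers equivalent runtime flags (e.g. GLEW_VERSION_2_0). If you still want the literal compile-time header selection the question asks for, without a Makefile variable, a hedged option is __has_include (standardized in C++17 and a long-standing GCC/Clang extension). Note this only picks headers; it cannot tell you what the driver supports at run time, which is why a loader remains the robust route:
// Probe at compile time which header set this system provides.
#ifdef __has_include
#  if __has_include(<GLES2/gl2.h>)
#    include <GLES2/gl2.h>    // GLES headers present: use them
#  else
#    include <GL/gl.h>
#    include <GL/glext.h>
#  endif
#else
#  include <GL/gl.h>          // old preprocessor: fall back to desktop GL
#  include <GL/glext.h>
#endif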

Related

win32 (Desktop Duplication API) IDXGIOutput1::DuplicateOutput() results in Access Violation within dxgi.dll

I'm writing a small C++ app that uses the Desktop Duplication API to get the display output. I've never done C programming before, and I got to where I am by staring at the win32 API documentation: https://learn.microsoft.com/en-us/windows/win32/api/dxgi1_2/
#include <iostream>
#pragma comment(lib, "windowsapp")
#include <roapi.h>
//#pragma comment(lib, "dxgi")
#include <dxgi1_2.h>

using namespace std;

int main()
{
    cout << RoInitialize(RO_INIT_SINGLETHREADED);
    // intermediate variables for casting
    IDXGIOutput* pDisplay_old;
    IDXGIFactory1* pFactory;
    IDXGIAdapter1* pGPU;
    IDXGIOutput1* pDisplay;
    IDXGIOutputDuplication* pCapture;
    DXGI_OUTDUPL_DESC captureDesc;
    // create factory
    if (CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&pFactory) != S_OK) return 1;
    // get GPU
    if (pFactory->EnumAdapters1(0, &pGPU) != S_OK) return 1;
    // get display
    if (pGPU->EnumOutputs(0, &pDisplay_old) != S_OK) return 1;
    pDisplay = (IDXGIOutput1*)pDisplay_old;
    DXGI_OUTDUPL_FRAME_INFO frameInfo;
    IDXGIResource* pFrame;
    HRESULT captureResult;
    do
    {
        // create capture
        // cout << pDisplay->DuplicateOutput(pGPU, &pCapture);
        //return 0;
        if (pDisplay->DuplicateOutput(pGPU, &pCapture) != S_OK) return 1;
        pCapture->GetDesc(&captureDesc);
        cout << captureDesc.ModeDesc.Width << ' ' << captureDesc.ModeDesc.Height;
        do
        {
            captureResult = pCapture->AcquireNextFrame(2000, &frameInfo, &pFrame);
            if (captureResult == S_OK)
            {
                cout << "HI";
                captureResult = pCapture->ReleaseFrame();
            }
            else if (captureResult == DXGI_ERROR_ACCESS_LOST) break;
            else return 1;
        }
        while (true);
    }
    while (true);
}
I'm using Visual Studio 2022 with only "desktop development with C++" enabled, on Windows 11 Insider build 22623.1037 ni_release, on a regular home PC with display, keyboard, mouse, etc.
The code worked fine until DuplicateOutput(), which complained E_NOINTERFACE. I'm certain there is an interface, since index 0 for EnumAdapters1 and EnumOutputs is where the desktop is displayed, and I obviously have a display attached showing the desktop. According to this guy https://devblogs.microsoft.com/oldnewthing/20041213-00/?p=37043, I need marshalling and apartments or something, so after more research I tried RoInitialize() with both RO_INIT_SINGLETHREADED and RO_INIT_MULTITHREADED. Now DuplicateOutput throws this exception.
It seems to happen within the library itself, which makes me think that it's either not my fault or I really messed something up, probably the latter.
I'm really confused now, and would like some assistance, thanks!
EDIT: I replaced "pDisplay = (IDXGIOutput1*)pDisplay_old;" with "pDisplay_old -> QueryInterface(&pDisplay);", and I'm back to E_NOINTERFACE. But I think I'm on the right track; how do I fix this error?
EDIT2: I looked at a related question, AcquireNextFrame not working (Desktop Duplication API & D3D11), and followed its answer by adding D3D11CreateDevice to my code:
#include <iostream>
#pragma comment(lib, "dxgi")
#pragma comment(lib, "d3d11")
#include <d3d11.h>
#include <dxgi1_2.h>

using namespace std;

int main()
{
    // intermediate variables for casting
    IDXGIOutput* pDisplay_old;
    IDXGIFactory1* pFactory;
    IDXGIAdapter* pGPU;
    ID3D11Device* pD3DDevice;
    IDXGIDevice* pDevice;
    IDXGIOutput1* pDisplay;
    IDXGIOutputDuplication* pCapture;
    DXGI_OUTDUPL_DESC captureDesc;
    // create DXGI factory
    if (CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&pFactory) != S_OK) return 1;
    // get GPU adapter
    if (pFactory->EnumAdapters(0, &pGPU) != S_OK) return 2;
    // create D3D11 device
    D3D_FEATURE_LEVEL D3DFeatures[6]
    {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_3,
        D3D_FEATURE_LEVEL_9_2,
        D3D_FEATURE_LEVEL_9_1
    };
    cout << D3D11CreateDevice(pGPU, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0, D3DFeatures, sizeof(D3DFeatures), D3D11_SDK_VERSION, &pD3DDevice, NULL, NULL); //!= S_OK) return 3;
    return 0;
    // get DXGI device from that
    pD3DDevice->QueryInterface(&pDevice);
    // get display
    if (pGPU->EnumOutputs(0, &pDisplay_old) != S_OK) return 4;
    pDisplay_old->QueryInterface(&pDisplay);
    DXGI_OUTDUPL_FRAME_INFO frameInfo;
    IDXGIResource* pFrame;
    HRESULT captureResult;
    do
    {
        // create capture
        cout << pDisplay->DuplicateOutput(pD3DDevice, &pCapture);
        return 0;
        if (pDisplay->DuplicateOutput(pGPU, &pCapture) != S_OK) return 5;
        pCapture->GetDesc(&captureDesc);
        cout << captureDesc.ModeDesc.Width << ' ' << captureDesc.ModeDesc.Height;
        do
        {
            captureResult = pCapture->AcquireNextFrame(2000, &frameInfo, &pFrame);
            if (captureResult == S_OK)
            {
                cout << "HI";
                captureResult = pCapture->ReleaseFrame();
            }
            else if (captureResult == DXGI_ERROR_ACCESS_LOST) break;
            else return 6;
        }
        while (true);
    }
    while (true);
}
D3D11CreateDevice seems like a complex function, and for me it keeps returning E_INVALIDARG. I'm not sure how to fix that.
The solution was provided in the comments:
pDisplay = (IDXGIOutput1*)pDisplay_old; is wrong, you must always use QueryInterface to get an interface from another. And you don't need RoInitialize. – Simon Mourier Jan 1 at 9:11
I replaced it with "pDisplay_old -> QueryInterface(&pDisplay);", and I'm back to E_NOINTERFACE, but I think I'm on the right track, how do I fix this error? – Tiger Yang Jan 4 at 16:54
I don't get E_NOINTERFACE (you shouldn't) on this QueryInterface call. What is wrong then is DuplicateOutput expects a Direct3D device, not an adapter interface reference. – Simon Mourier Jan 4 at 17:18
I've worked on it and updated the post above – Tiger Yang Jan 7 at 1:51
Your code is wrong again, use D3D_DRIVER_TYPE_UNKNOWN if you pass an adapter as 1st arg (or ask for hardware and pass nullptr as 1st arg) and use ARRAYSIZE(D3DFeatures), not sizeof(D3DFeatures) as 6th arg. Use the DirectX Debug Layer learn.microsoft.com/en-us/windows/win32/direct3d11/… walbourn.github.io/direct3d-sdk-debug-layer-tricks to ease debugging – Simon Mourier Jan 7 at 7:36
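Putting those comments together, a corrected sketch of the relevant lines might look like this (untested against the original setup; the variable names are the ones from the code above):
// QueryInterface with a checked HRESULT instead of a raw cast
if (FAILED(pDisplay_old->QueryInterface(&pDisplay))) return 4;

// D3D_DRIVER_TYPE_UNKNOWN because an explicit adapter is passed,
// and ARRAYSIZE (element count) rather than sizeof (byte count)
HRESULT hr = D3D11CreateDevice(
    pGPU, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
    D3DFeatures, ARRAYSIZE(D3DFeatures), D3D11_SDK_VERSION,
    &pD3DDevice, nullptr, nullptr);
if (FAILED(hr)) return 3;

// DuplicateOutput expects the Direct3D device, not the adapter
if (pDisplay->DuplicateOutput(pD3DDevice, &pCapture) != S_OK) return 5;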

Windows exception about amdvlk64.dll when trying to create a vulkan instance

I tried using Vulkan, but I can't get it to work. When I run the first sample shipped with the SDK (01-init_instance.cpp, compiled with VS 2019), I get this exception when creating the Vulkan instance:
Exception thrown at 0x00007FFFE7EDAD11 (amdvlk64.dll) in game.exe: 0xC0000005:
Access violation reading location 0xFFFFFFFFFFFFFFFF.
I've tried it with app_info.apiVersion set to both VK_API_VERSION_1_0 and VK_API_VERSION_1_1. I also tried setting inst_info.pApplicationInfo to NULL, but the behavior doesn't change.
I am using an AMD GPU (AMD Radeon (TM) R9 390 Series); the driver version is 17.1.1, and the AMD driver interface also reports Vulkan™ Driver Version 1.5.0 and Vulkan™ API Version 1.0.39.
And here is the sample:
#include <iostream>
#include <cstdlib>
#include <util_init.hpp>

#define APP_SHORT_NAME "vulkansamples_instance"

int main(int, char *[]) {
    VkApplicationInfo app_info = {};
    app_info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app_info.pNext = NULL;
    app_info.pApplicationName = APP_SHORT_NAME;
    app_info.applicationVersion = 1;
    app_info.pEngineName = APP_SHORT_NAME;
    app_info.engineVersion = 1;
    app_info.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo inst_info = {};
    inst_info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    inst_info.pNext = NULL;
    inst_info.flags = 0;
    inst_info.pApplicationInfo = &app_info;
    inst_info.enabledExtensionCount = 0;
    inst_info.ppEnabledExtensionNames = NULL;
    inst_info.enabledLayerCount = 0;
    inst_info.ppEnabledLayerNames = NULL;

    VkInstance inst;
    VkResult res;
    res = vkCreateInstance(&inst_info, NULL, &inst);
    if (res == VK_ERROR_INCOMPATIBLE_DRIVER) {
        std::cout << "cannot find a compatible Vulkan ICD\n";
        exit(-1);
    } else if (res) {
        std::cout << "unknown error\n";
        exit(-1);
    }
    vkDestroyInstance(inst, NULL);
    return 0;
}
Hopefully someone can help, as apparently no one on the internet seems to understand why this happens.
Driver version 17.1.1 is very old (IIRC that means January 2017). In an ideal world it would still work, but as you are experiencing, there can be compatibility issues.
Current drivers are available from the AMD Support site. AMD offers a "recommended" driver and a more current "optional" one. I have never had any problems with "optional", but it may nag you to update more often.

OMNeT++: Different results in 'fast' or 'express' mode

Used versions: OMNeT++ 5.0 with INET 3.4.0
I created some code which gives me reliable results in ‘step-by-step’ or ‘animated’ simulation mode. The moment I change to ‘fast’ or ‘express’ mode, it gets buggy. The following simplified example illustrates the problem:
void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    if (msg == CheckAck) {
        std::cout << "CheckAck: " << msg << std::endl;
    }
    if (msg == transmissionAnnouncement) {
        std::cout << "transmissionAnncouncement: " << msg << std::endl;
    }
    if (msg == transmissionEvent) {
        std::cout << "transmissionEvent: " << msg << std::endl;
    }
    delete msg;
}
This is the function that is called for handling self-messages. Depending on which self-message I received, I need to run different if branches.
I get this correct output in step-by-step or animated mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionEvent: (omnetpp::cMessage)transmissionEvent
And this is the strange output I get using fast or express mode:
CheckAck: (omnetpp::cMessage)CheckAck
transmissionAnncouncement: (omnetpp::cMessage)transmissionAnncouncement
transmissionAnncouncement: (omnetpp::cMessage)transmissionEvent
transmissionEvent: (omnetpp::cMessage)transmissionEvent
The third output line shows that the self-message is ‘transmissionEvent’, yet ‘if (msg == transmissionAnnouncement)’ mistakenly evaluates to true as well.
As shown above, I get different simulation results depending on the simulation mode I am using. What is the reason for the different output? Why is there a difference at all?
As Christoph and Rudi mentioned, something was wrong with the memory allocation: when a message is deleted and a new one happens to be allocated at the same address, a stale pointer comparison will mistakenly match. The differing behavior across run modes is merely a symptom of an error of this kind.
In my case it was more robust to check for message kinds, like:
if (msg->getKind() == checkAckAckType) {
instead of the pointer comparisons used in the original question. I defined the message kinds using simple enums.
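A minimal sketch of that approach; the enum values and the setKind-style constructor arguments are hypothetical additions, not taken from the original code:
// hypothetical kind values; cMessage's kind field is a short
enum SelfMsgKind : short { KIND_CHECK_ACK = 1, KIND_ANNOUNCEMENT, KIND_EVENT };

// when creating the self-messages:
// CheckAck = new cMessage("CheckAck", KIND_CHECK_ACK);

void MyMacSlave::handleSelfMessage(cMessage *msg)
{
    // compare stable kind values instead of pointers that may be reused
    switch (msg->getKind()) {
        case KIND_CHECK_ACK:    std::cout << "CheckAck: " << msg << std::endl; break;
        case KIND_ANNOUNCEMENT: std::cout << "transmissionAnnouncement: " << msg << std::endl; break;
        case KIND_EVENT:        std::cout << "transmissionEvent: " << msg << std::endl; break;
    }
    delete msg;
}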

"No Devices connected" with PCL1.6 and Primesense Camera (Carmine 1.09)

I am using a PrimeSense camera for a project; its device driver reports Carmine 1.09 (installed from the OpenNI folder). When I run OpenNI2's viewer, I can see the depth data coming through, so the camera is definitely connected.
However, when I run a project using PCL, it keeps throwing an exception saying "no devices connected". I tried many different versions of the PrimeSense driver (e.g. https://github.com/jspricke/openni-sensor-primesense), but that didn't help.
Here is where the problem occurs; any call into pcl:: ends up throwing this exception.
try {
    if (!pcl::OpenNIGrabber().getDevice())
    {
        std::cout << "No device is found!" << std::endl;
        return;
    }
    else
    {
        std::cout << "Device is found!" << std::endl;
        pcl::Grabber* grabber = new pcl::OpenNIGrabber();
    }
}
catch (const pcl::PCLIOException& ex)
{
    std::cout << ex.what() << std::endl;
    return;
}
catch (const char* msg)
{
    std::cout << msg << std::endl;
    return;
}
FYI, I'm currently on Windows 8.1 64-bit, but the projects are all built as 32-bit, with PCL 1.6 and OpenNI 1.5.4 (I tried the patched version as well).
Does anybody know a solution to this?

Multiple applications write to one console - mixed/messed output

I have the following system architecture (it cannot be changed - legacy code): one main application invokes one or more other applications, and these applications interact over an IP protocol.
All applications write to one console window. Unfortunately the console output can get messed up (one character from app 1, the next from app 2, the next from app 4, etc.).
All applications write to the console via one Logger.dll (which provides static logging functions) using cout/cerr.
Is there a way to prevent mixed-up logging messages in this setup?
Thanks in advance.
EDIT: code added:
void Logger::Log(const std::string & componentName, const std::string & Text, LogLevel logLevel, bool logToConsole, bool beep)
{
    std::ostringstream stream;
    switch (logLevel)
    {
    case LOG_INFO:
        if (logToConsole)
        {
            stream << componentName << ": INFO " << Text;
            mx_console.lock(); // this is a static boost::mutex
            std::cout << stream.str() << std::endl;
            std::cout.flush();
            mx_console.unlock();
        }
        break;
    case LOG_STATUS:
        stream << componentName << ": STATUS " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    case LOG_WARNING:
        stream << componentName << ": WARNING " << Text;
        mx_console.lock();
        std::cout << stream.str() << std::endl;
        std::cout.flush();
        mx_console.unlock();
        break;
    default:;
    }
    if (beep)
        Beep(500, 50);
}
Since you have separate logging functionality, you can at minimum use some kind of locking (a global mutex, etc.) to avoid interspersing messages from different applications too much. To make the output more readable and grepable, add some identifying information, such as the process name or PID. Wrapping your Logger.dll around an existing logging library sounds like an option as well.
Alternatively, you could have the logging functions simply forward messages to your main application and let it sort out the synchronization and interleaving.
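A minimal sketch of the cross-process locking idea, assuming Windows (the logger above already calls Beep): the static boost::mutex only serializes threads inside one process, whereas a named kernel mutex is shared by every application that opens the same name. "Global\\ConsoleLogLock" is a hypothetical name:
#include <windows.h>
#include <iostream>
#include <string>

void LogLine(const std::string& line)
{
    // every process that uses this name shares the same kernel mutex
    static HANDLE hLock = CreateMutexA(nullptr, FALSE, "Global\\ConsoleLogLock");
    WaitForSingleObject(hLock, INFINITE); // blocks other processes, not just other threads
    std::cout << line << std::endl;       // std::endl also flushes
    ReleaseMutex(hLock);
}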
Syslog might be a solution for you, as it is intended to handle logs from various sources. Syslog was developed for Unix, but this answer lists versions for Windows.
You can change your logger to log to syslog instead of the console.
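A minimal sketch of that route, assuming a POSIX-style syslog API (on Windows, one of the ports mentioned above would supply it):
#include <syslog.h>

void LogViaSyslog(const char* componentName, const char* text)
{
    openlog(componentName, LOG_PID, LOG_USER); // tag entries with the process id
    syslog(LOG_INFO, "%s", text);              // severity would map from LogLevel
    closelog();
}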
I have now replaced all of the
std::cout << stream.str();
statements with
std::string str = stream.str();
printf(str.c_str());
and now the output isn't messed up character-wise anymore.
But I don't have a good explanation for this behavior. Does anybody know why?
