I am working on setting up a basic OpenGL application by dynamically linking the opengl32.dll file pre-packaged with Windows (that part is non-optional). However, I am having quite a lot of difficulty getting procedure addresses for the functions related to Vertex Buffer Objects.
My initial investigations have revealed that Windows only exposes the OpenGL 1.1 specification at first, and wglGetProcAddress calls need to be used to get any functions more recent than that. So I modified my code to attempt that method as well. I am using glGenBuffers as my example case, and have tried four different ways to load it; all fail. I have also used glGetString to check my version number, which is reported as major version 4, so I doubt it lacks VBO support.
How should I be getting the proc addresses for these VBO functions?
A minimized example of the code I'm dealing with is here:
#include <iostream>
#include "windows.h"

using namespace std;

int main()
{
    //Load openGL and get necessary functions
    HINSTANCE hDLL = LoadLibrary("opengl32.dll");
    PROC (WINAPI *winglGetProcAddress)(LPCSTR);
    void (*genBuffers)(int, unsigned int*);
    if(hDLL)
    {
        winglGetProcAddress = (PROC (WINAPI *)(LPCSTR))GetProcAddress(hDLL, "wglGetProcAddress");
        if(winglGetProcAddress == NULL){cout << "wglGetProcAddress not found!" << endl; return 0;}
        genBuffers = (void(*)(int, unsigned int*))GetProcAddress(hDLL, "glGenBuffers");
        if(genBuffers == NULL){genBuffers = (void(*)(int, unsigned int*))winglGetProcAddress("glGenBuffers");}
    }
    else
    {cout << "This application requires Open GL support." << endl; return 0;}

    //glGenBuffers not supported, fallback to glGenBuffersARB
    if(genBuffers == NULL)
    {
        genBuffers = (void(*)(int, unsigned int*))GetProcAddress(hDLL, "glGenBuffersARB");
        if(genBuffers == NULL){genBuffers = (void(*)(int, unsigned int*))winglGetProcAddress("glGenBuffersARB");}
        if(genBuffers == NULL)
        {cout << "Could not locate glGenBuffers or glGenBuffersARB in opengl32.dll." << endl; return 0;}
    }

    //get a Vertex Buffer Object
    unsigned int a[1];
    genBuffers(1, a);

    //cleanup
    if(!FreeLibrary(hDLL))
    {cout << "Failed to free the opengl32.dll library." << endl;}
    return 0;
}
When run, it loads the library and gets wglGetProcAddress correctly, but then outputs the "Could not locate glGenBuffers or glGenBuffersARB in opengl32.dll." error, indicating it failed to get either "glGenBuffers" or "glGenBuffersARB" using either GetProcAddress or wglGetProcAddress.
Alternatively, if this does mean I do not have VBO support, will a driver update help, or is it even possible to get it supported? I'd really rather not use deprecated immediate mode calls.
I am running this in Code::Blocks, on Windows XP, Intel Core i5, with an NVIDIA GeForce GTX 460.
I'm on Windows 7 and I'm trying to change the color balance from code. Specifically, I'm trying to change the sliders shown in the color calibration wizard.
I'm assuming that the correct functions are SetMonitorRedGreenOrBlueGain and SetMonitorRedGreenOrBlueDrive.
Here is my minimal working example:
#pragma comment(lib, "dxva2.lib")
#include <windows.h>
#include <lowlevelmonitorconfigurationapi.h>
#include <physicalmonitorenumerationapi.h>
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <string>

using namespace std;

int main()
{
    HWND hWnd = GetDesktopWindow();
    HMONITOR hMonitor = MonitorFromWindow(hWnd, MONITOR_DEFAULTTOPRIMARY);
    cout << "Monitor: " << hMonitor << endl;

    DWORD cPhysicalMonitors;
    BOOL bSuccess = GetNumberOfPhysicalMonitorsFromHMONITOR(hMonitor, &cPhysicalMonitors);
    cout << "GetNumber: " << bSuccess << ", number of physical monitors: " << cPhysicalMonitors << endl;

    LPPHYSICAL_MONITOR pPhysicalMonitors = (LPPHYSICAL_MONITOR)malloc(cPhysicalMonitors * sizeof(PHYSICAL_MONITOR));
    bSuccess = GetPhysicalMonitorsFromHMONITOR(hMonitor, cPhysicalMonitors, pPhysicalMonitors);
    cout << "GetPhysicalMonitor: " << bSuccess << endl
         << "Handle: " << pPhysicalMonitors[0].hPhysicalMonitor << endl
         << "Description: ";
    wcout << pPhysicalMonitors[0].szPhysicalMonitorDescription;

    DestroyPhysicalMonitors(cPhysicalMonitors, pPhysicalMonitors);
    free(pPhysicalMonitors);
}
The output is:
Monitor: 00010001
GetNumber: 1, number of physical monitors: 1
GetPhysicalMonitor: 1
Handle: 00000000
Description: Generic PnP Monitor
All the functions for brightness and color gains require a HANDLE hPhysicalMonitor, which is always null for my display (laptop screen). But I know it must be possible to change the color balance, since the color calibration window can do it.
What am I doing wrong?
EDIT 1:
As mentioned in the comments, it seems that the hPhysicalMonitor is correct. My issue is that calling functions like GetMonitorBrightness returns FALSE with an error code of ERROR_GRAPHICS_I2C_ERROR_TRANSMITTING_DATA (An error occurred while transmitting data to the device on the I2C bus.)
EDIT 2:
My monitor does support setting brightness and has 11 levels. Windows itself and some programs are able to adjust the brightness (the monitor's backlight) directly. So the issue must be software-related.
My issue is that calling functions like GetMonitorBrightness returns FALSE with an error code of ERROR_GRAPHICS_I2C_ERROR_TRANSMITTING_DATA (An error occurred while transmitting data to the device on the I2C bus.)
GetMonitorBrightness works for me; I tested it on a desktop. Some similar cases point out that GetMonitorBrightness does not work on some laptops:
GetMonitorCapabilities returns false
How to control system brightness using windows api?
I think your laptop does not support DDC/CI.
GetMonitorCapabilities: The function fails if the monitor does not support DDC/CI.
You may first check if your laptop supports DDC/CI.
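A minimal sketch of such a check, reusing the hPhysicalMonitor handle obtained from GetPhysicalMonitorsFromHMONITOR in the question (GetMonitorCapabilities and GetMonitorBrightness live in highlevelmonitorconfigurationapi.h and dxva2.lib; take this as an illustration, not tested code):
#pragma comment(lib, "dxva2.lib")
#include <windows.h>
#include <highlevelmonitorconfigurationapi.h>
#include <iostream>

// hPhysicalMonitor: a handle obtained via GetPhysicalMonitorsFromHMONITOR,
// as in the question's enumeration code.
bool CheckDdcCi(HANDLE hPhysicalMonitor)
{
    DWORD caps = 0, colorTemps = 0;
    // Documented to fail if the monitor does not support DDC/CI.
    if (!GetMonitorCapabilities(hPhysicalMonitor, &caps, &colorTemps))
    {
        std::cout << "No DDC/CI support (error " << GetLastError() << ")" << std::endl;
        return false;
    }
    if (caps & MC_CAPS_BRIGHTNESS)
    {
        DWORD minB = 0, curB = 0, maxB = 0;
        if (GetMonitorBrightness(hPhysicalMonitor, &minB, &curB, &maxB))
            std::cout << "Brightness " << curB << " in [" << minB << ", " << maxB << "]" << std::endl;
    }
    return true;
}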
I am trying to make some Linux-based code run on macOS. It is the POSIX OSAL layer for NASA Core Flight System as found here: https://github.com/nasa/osal.
I am observing that the code uses POSIX conditions and in particular, there is a call like the following:
if (pthread_cond_destroy(&(sem->cv)) != 0) {
printf("pthread_cond_destroy %d %s\n", errno, strerror(errno)); // my addition
...
}
On macOS, the tests related to this code provided in the OSAL repository always fail because the call to pthread_cond_destroy always results in:
pthread_cond_destroy 78 Function not implemented
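A side note on the error reporting here (this is my own addition, like the printf above): the pthread_* functions report failures through their return value and are not required to set errno, so the 78 printed above could in principle be a stale errno rather than the code pthread_cond_destroy actually returned. A sketch of logging the return value instead, using the same sem->cv field:
int rc = pthread_cond_destroy(&(sem->cv));
if (rc != 0) {
    // pthread functions return the error number directly; errno may be stale.
    printf("pthread_cond_destroy %d %s\n", rc, strerror(rc));
}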
I have found an example in the Apple documentation showing the use of conditions (Threading Programming Guide / Synchronization / Using Conditions). In that example there is no call to pthread_cond_destroy, but I cannot draw any conclusions about whether that call should be there, because the example is simplified.
This is what the header looks like on my machine:
__API_AVAILABLE(macos(10.4), ios(2.0))
int pthread_cond_destroy(pthread_cond_t *);
I am wondering whether pthread_cond_* functionality is simply missing on macOS, so that I have to implement a replacement for it, or whether there is some way to make it work.
EDIT: The minimal example below works fine for me, so the problem should be somewhere around the problematic code. What I still don't understand is why I am getting the ENOSYS/78 error code; for one thing, it is not mentioned on the man page man/3/pthread_cond_destroy:
#include <iostream>
#include <cassert>  // assert
#include <cerrno>   // errno
#include <pthread.h>

int main() {
    pthread_cond_t condition;
    pthread_cond_init(&condition, NULL);
    int result = pthread_cond_destroy(&condition);
    assert(result == 0);
    assert(errno == 0);
    std::cout << "Hello, World!" << std::endl;
    return 0;
}
I'm trying to retrieve the complete list of the user's preferred languages from a C++/Qt application, as configured in the "Region & language" page of the user's preferences.
For that, I am trying the WinAPI function GetUserPreferredUILanguages(), on an up-to-date Windows 10 Pro system.
However, the function always only returns the first entry (the main Windows display language), and "en-US". If English is configured as the main language, then only "en-US" is returned. E.g., if I have (German, French, English) configured, ["de-de", "en-US"] is returned, French is omitted. If I add more languages to the list, they are omitted as well.
I also looked at User Interface Language Management, but to no avail. GetSystemPreferredUILanguages() for example only returns "en-US". GetUILanguageFallbackList() returns ["de-de", "de", "en-US", "en"].
The code I use:
// calling GetUserPreferredUILanguages() twice, once to get number of
// languages and required buffer size, then to get the actual data
ULONG numberOfLanguages = 0;
DWORD bufferLength = 0;
const auto result1 = GetUserPreferredUILanguages(MUI_LANGUAGE_NAME,
&numberOfLanguages,
nullptr,
&bufferLength);
// result1 is true, numberOfLanguages=2
QVector<wchar_t> languagesBuffer(static_cast<int>(bufferLength));
const auto result2 = GetUserPreferredUILanguages(MUI_LANGUAGE_NAME,
&numberOfLanguages,
languagesBuffer.data(),
&bufferLength);
// result2 is true, languageBuffer contains "de-de", "en-US"
Is this not the right function to use, or am I misunderstanding something about the language configuration in Windows 10? How can I get the complete list of preferred languages? I see a UWP API that might do the job, but if possible, I'd like to use the C API, as it integrates more easily with the C++ codebase at hand (unmanaged C++, that is).
GlobalizationPreferences.Languages is usable from unmanaged C++ because GlobalizationPreferences has DualApiPartitionAttribute.
Here is a C++/WinRT example of using GlobalizationPreferences.Languages:
#pragma once
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.System.UserProfile.h>
#include <iostream>
#pragma comment(lib, "windowsapp")
using namespace winrt;
using namespace Windows::Foundation;
using namespace Windows::System::UserProfile;
int main()
{
    winrt::init_apartment();
    for (const auto& lang : GlobalizationPreferences::Languages()) {
        std::wcout << lang.c_str() << std::endl;
    }
}
And a WRL example for those who cannot migrate to C++17:
#include <roapi.h>
#include <wrl.h>
#include <Windows.System.UserProfile.h>
#include <iostream>
#include <stdint.h>
#pragma comment(lib, "runtimeobject.lib")
using namespace Microsoft::WRL;
using namespace Microsoft::WRL::Wrappers;
using namespace ABI::Windows::Foundation::Collections;
using namespace ABI::Windows::System::UserProfile;
int main()
{
    RoInitializeWrapper initialize(RO_INIT_MULTITHREADED);
    if (FAILED(initialize)) {
        std::cerr << "RoInitialize failed" << std::endl;
        return 1;
    }
    ComPtr<IGlobalizationPreferencesStatics> gps;
    HRESULT hr = RoGetActivationFactory(
        HStringReference(
            RuntimeClass_Windows_System_UserProfile_GlobalizationPreferences)
            .Get(),
        IID_PPV_ARGS(&gps));
    if (FAILED(hr)) {
        std::cerr << "RoGetActivationFactory failed" << std::endl;
        return 1;
    }
    ComPtr<IVectorView<HSTRING>> langs;
    hr = gps->get_Languages(&langs);
    if (FAILED(hr)) {
        std::cerr << "Could not get Languages" << std::endl;
        return 1;
    }
    uint32_t size;
    hr = langs->get_Size(&size);
    if (FAILED(hr)) {
        std::cerr << "Could not get Size" << std::endl;
        return 1;
    }
    for (uint32_t i = 0; i < size; ++i) {
        HString lang;
        hr = langs->GetAt(i, lang.GetAddressOf());
        if (FAILED(hr)) {
            std::cerr << "Could not get Languages[" << i << "]" << std::endl;
            continue;
        }
        std::wcout << lang.GetRawBuffer(nullptr) << std::endl;
    }
}
I found out that the language list returned by GetUserPreferredUILanguages() is tied to your "Windows display language" setting and has nothing to do with the input-method list order.
For example, on Win10 21H2 with French (Canada) selected as the display language,
I can see GetUserPreferredUILanguages() return a list of three langtags:
fr-CA\0fr-FR\0en-US\0\0
In summary, for GetUserPreferredUILanguages() and GetUILanguageFallbackList(), the returned langtag list is determined solely by the current user's "Windows display language" selection, which is a user-wide, single-selection setting. For a specific display-language selection, the items in the list and their order are hard-coded by Windows itself. It is even unrelated to which input methods (IMEs) you have added in the control panel -- for example, if you add "fr-CA" but not "fr-FR", the fallback list will still be fr-CA\0fr-FR\0en-US\0\0.
According to my experiment, the difference between the two APIs is that GetUILanguageFallbackList() also returns neutral langtags ("fr", "en", etc.), so it produces a superset of GetUserPreferredUILanguages().
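For completeness, a small sketch of walking such a double-null-terminated langtag list with the API from the question (the buffer layout is the one shown above, e.g. fr-CA\0fr-FR\0en-US\0\0):
#include <windows.h>
#include <cwchar>
#include <iostream>
#include <vector>

int main()
{
    ULONG numLanguages = 0;
    ULONG bufferLength = 0;
    // First call: query the required buffer size.
    GetUserPreferredUILanguages(MUI_LANGUAGE_NAME, &numLanguages, nullptr, &bufferLength);
    std::vector<wchar_t> buffer(bufferLength);
    // Second call: fetch the double-null-terminated list itself.
    GetUserPreferredUILanguages(MUI_LANGUAGE_NAME, &numLanguages, buffer.data(), &bufferLength);
    // Entries are separated by L'\0'; an empty entry marks the end of the list.
    for (const wchar_t* p = buffer.data(); *p != L'\0'; p += std::wcslen(p) + 1) {
        std::wcout << p << std::endl;
    }
    return 0;
}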
Whenever I call cudaMemPrefetchAsync() it returns the error code cudaErrorInvalidDevice. I am sure that I pass the right device id (I have only one CUDA-capable GPU in my laptop, under id == 0).
I believe the code sample posted below is error-free, but at line 52 (the call to cudaMemPrefetchAsync()) I keep getting this error.
I tried:
A clean driver installation (latest version).
Checking Google for an answer, but I could not find any (I managed only to find this).
(I have no idea what else to try.)
System Spec:
OS: Microsoft Windows 8.1 x64 Home
IDE: Visual Studio 2015
CUDA toolkit: 8.0.61
NVIDIA GPU: GeForce GTX 960M
NVIDIA GPU driver: ver. 381.65 (latest)
Compute Capability: 5.0 (Maxwell)
Unified Memory support: supported
Intel integrated GPU: Intel HD Graphics 4600
Code Sample:
/////////////////////////////////////////////////////////////////////////////////////////////////////////
// TEST AREA:
// -- INCLUDE:
/////////////////////////////////////////////////////////////////////////////////////////////////////////
// Cuda Libs: ( Device Side ):
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
// Std C++ Libs:
#include <iostream>
#include <iomanip>
///////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////
// TEST AREA:
// -- NAMESPACE:
/////////////////////////////////////////////////////////////////////////////////////////////////////////
using namespace std;
///////////
/////////////////////////////////////////////////////////////////////////////////////////////////////////
// TEST AREA:
// -- START POINT:
/////////////////////////////////////////////////////////////////////////////////////////////////////////
int main() {
    // Set cuda Device:
    if (cudaSetDevice(0) != cudaSuccess)
        cout << "ERROR: cudaSetDevice(***)" << endl;

    // Array:
    unsigned int size = 1000;
    double * d_ptr = nullptr;

    // Allocate unified memory:
    if (cudaMallocManaged(&d_ptr, size * sizeof(double), cudaMemAttachGlobal) != cudaSuccess)
        cout << "ERROR: cudaMallocManaged(***)" << endl;
    if (cudaDeviceSynchronize() != cudaSuccess)
        cout << "ERROR: cudaDeviceSynchronize(***)" << endl;

    // Prefetch:
    if (cudaMemPrefetchAsync(d_ptr, size * sizeof(double), 0) != cudaSuccess)
        cout << "ERROR: cudaMemPrefetchAsync(***)" << endl;

    // Exit:
    getchar();
}
///////////
Thanks to talonmies I have realized that my GPU does not support the prefetch feature. In order to be able to use cudaMemPrefetchAsync(***), the GPU must have a non-zero value in (cudaDeviceProp)deviceProp.concurrentManagedAccess.
See more here.
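For reference, a minimal sketch of that check (cudaDeviceGetAttribute() with cudaDevAttrConcurrentManagedAccess would work equally well):
#include <cuda_runtime.h>
#include <iostream>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess)
        std::cout << "ERROR: cudaGetDeviceProperties(***)" << std::endl;
    // Non-zero: the device can service cudaMemPrefetchAsync().
    // Zero (as on this Maxwell notebook GPU): the prefetch call fails.
    std::cout << "concurrentManagedAccess: " << prop.concurrentManagedAccess << std::endl;
}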
I'm working on a means of installing a driver. Because of the multiple platforms on which this must work, I'm shelling out to both devcon and dpinst to do the work of driver install/update/removal as needed. While testing, I'm having problems with shelling out to devcon. To isolate the issue, I wrote a small app that does what devcon does in update (see here), using the devcon source from the WinDDK for reference. I'm having some problems with UpdateDriverForPlugAndPlayDevices() from the Setup API (actually part of Newdev.dll; see here). The source code is here:
#include <iostream>
#include <Windows.h>
#include <newdev.h>
int main(int argc, char** argv) {
    // Go through the same steps as does dev con for this update
    char infFile[MAX_PATH];
    if(3 > argc) {
        std::cerr << "an INF and HW ID must be specified" << std::endl;
        return 1;
    }
    DWORD result(GetFullPathName(argv[1], MAX_PATH, infFile, NULL));
    if((result >= MAX_PATH) || (0 == result)) {
        std::cerr << "path is too long for buffer" << std::endl;
        return 1;
    }
    if(GetFileAttributes(infFile) == INVALID_FILE_ATTRIBUTES) {
        std::cerr << "file doesn't exist" << std::endl;
        return 1;
    }
    BOOL reboot(FALSE);
    if(!UpdateDriverForPlugAndPlayDevices(NULL, argv[2], infFile, INSTALLFLAG_FORCE, &reboot)) {
        std::cerr << "Failed to install the driver. Code: "
                  << GetLastError()
                  << std::endl;
        return 2;
    }
    if(reboot) {
        std::cout << "A reboot is needed to complete driver install"
                  << std::endl;
    }
    return 0;
}
The program fails when UpdateDriverForPlugAndPlayDevices() returns false. It then prints the error code returned by GetLastError(), so I'd know what went wrong. The error code returned is 259, which according to this resource is ERROR_NO_MORE_ITEMS. According to the link for UpdateDriverForPlugAndPlayDevices(), the function returns this error code when "The function found a match for the HardwareId value, but the specified driver was not a better match than the current driver and the caller did not specify the INSTALLFLAG_FORCE flag." You'll notice from my code that I did specify this flag.
I do not know where to go from here. Can someone please identify from this code what it is I'm missing? This just has the "feel" of something simple, but I'm totally missing it.
Thank you,
Andy
The problem appeared to be not with the code but with the INF file. Interestingly, the documentation for the function says that using that flag will force the install, yet it didn't when the INF file didn't "list" any device classes in the models section. This is how I was eventually able to install: I added the correct device class to the models section of the INF.
EDIT Sep. 17, 2020
It was requested by someone just today (as of this edit) to add an example from the INF. It's been 8 years since I had this issue, and I no longer work for this team. However, as best I can recall, and drawing heavily upon the docs for INF Models Section and INF Manufacturers Section, I hope this helps.
Essentially, the class is specified by the Models Section and the model is specified by the Manufacturer Section.
[Manufacturer]
%MfgName%=Standard,NTamd64
[Standard.NTamd64]
%DeviceString%=<class path or GUID>\<device>
[Strings]
MfgName=ACME
DeviceString="Device Type"