I'm trying to open a fullscreen window using SDL2. I've looked thoroughly at the documentation on display and window management (https://wiki.libsdl.org/CategoryVideo), but I don't understand what the best practice is for getting the resolution of the display I am actually working on.
I have the following sample code:
SDL_DisplayMode mMode;
SDL_Rect mRect;
int ret0 = SDL_GetDisplayBounds(0, &mRect);
std::cout << "bounds w and h are: " << mRect.w << " x " << mRect.h << std::endl;
int ret2 = SDL_GetCurrentDisplayMode(0, &mMode);
std::cout << "current display res w and h are: " << mMode.w << " x " << mMode.h << std::endl;
int ret3 = SDL_GetDisplayMode(0, 0, &mMode);
std::cout << "display mode res w and h are: " << mMode.w << " x " << mMode.h << std::endl;
I am working on a single display that has a resolution of 1920x1080. However, the printed results are:
(program output was posted as an image; only the SDL_GetDisplayMode() call printed the full 1920x1080)
It seems that SDL_GetDisplayMode() is the only function that reports the correct resolution, so I'd be inclined to use that one. However, I've read that the display modes returned by SDL_GetDisplayMode() are sorted by a certain priority, so calling it with index 0 returns the largest supported resolution for the display, which is not necessarily the current resolution (see also: SDL desktop resolution detection in Linux).
My question is: what is the best practice to obtain the correct resolution?
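For what it's worth, SDL2 also provides SDL_GetDesktopDisplayMode(), which reports the mode the desktop is currently running at rather than an entry from the sorted mode list. A minimal, self-contained sketch (error handling kept deliberately short):
#include <SDL.h>
#include <iostream>
int main() {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        std::cerr << "SDL_Init failed: " << SDL_GetError() << std::endl;
        return 1;
    }
    SDL_DisplayMode mode;
    // Desktop mode: the resolution the desktop currently uses, as opposed to
    // SDL_GetDisplayMode(0, 0, ...), which returns the largest supported mode.
    if (SDL_GetDesktopDisplayMode(0, &mode) == 0)
        std::cout << "desktop mode: " << mode.w << " x " << mode.h << std::endl;
    else
        std::cerr << "SDL_GetDesktopDisplayMode failed: " << SDL_GetError() << std::endl;
    SDL_Quit();
    return 0;
}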
I was grading some exercises, and in one specific program, although the algorithm seemed correct, it was far too slow (and I mean far too slow). The program was accessing a map using map::at (introduced in C++11). With the minimal change of replacing at with find (and fixing the syntax accordingly), the same program became really fast compared to the original version.
Looking at cplusplus.com, both methods claim to have the same complexity, and I couldn't see why one would behave differently from the other (other than API reasons: not throwing an exception, etc.).
Then I saw that the description in the section about data races is different, but I don't fully understand the implications. Is my assumption correct that map::at is thread safe (whereas map::find is not) and thus incurs some runtime penalty?
http://www.cplusplus.com/reference/map/map/at/
http://www.cplusplus.com/reference/map/map/find/
Edit
Both are in a loop executed 10,000,000 times, compiled with no optimization flags, just g++ foo.cpp. Here is the diff (the arrayX variables are vectors, m is a map):
< auto t = m.find(array1.at(i));
< auto t2 = t->second.find(array2.at(i));
< y = t->second.size();
< cout << array.at(i) << "[" << t2->second << " of " << y << "]" << endl;
---
> auto t = m.at(array1.at(i));
> x = t.at(array2.at(i));
> y = m.at(array1.at(i)).size();
> cout << array.at(i) << "[" << x << " of " << y << "]" << endl;
The performance difference you are observing can be attributed to object copying.
auto t = m.at(array1.at(i));
According to the template argument deduction rules (the same rules apply to the auto specifier), in the above statement t is deduced to mapped_type, which triggers an object copy.
You need to define t as auto& t for it to be deduced to mapped_type&.
Related conversation: `auto` specifier type deduction for references
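A minimal sketch of the difference (hypothetical data): with a large mapped_type, the copy triggered by plain auto dominates the cost of the lookup itself, while find only hands back a cheap-to-copy iterator:
#include <map>
#include <string>
#include <vector>
int main() {
    std::map<std::string, std::vector<int>> m;
    m["key"] = std::vector<int>(1000000, 42);
    auto  copy = m.at("key");   // deduced as std::vector<int>: copies a million ints
    auto& ref  = m.at("key");   // deduced as std::vector<int>&: no copy
    auto  it   = m.find("key"); // an iterator: cheap to copy even with plain auto
    (void)copy; (void)ref; (void)it;
    return 0;
}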
I am aware that drivers are not required to support variable line widths; how to query driver support is shown in this answer:
GLfloat lineWidthRange[2] = {0.0f, 0.0f};
glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, lineWidthRange);
std::cout << "line width range: " << lineWidthRange[0] << ", " << lineWidthRange[1] << std::endl;
Querying the current line width:
GLfloat currLineWidth[1] = {0.0f};
glGetFloatv(GL_LINE_WIDTH, currLineWidth);
std::cout << "line width before: " << currLineWidth[0] << std::endl;
So right now it should, and does, report 1 for the current line width. I put a sleep in just to make absolutely sure I wasn't reading back the value before it was actually set. So let's set it and read back:
glLineWidth(8.0f);
{
// local testing, #include <thread> and <chrono>
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
}
glGetFloatv(GL_LINE_WIDTH, currLineWidth);
std::cout << "line width after: " << currLineWidth[0] << std::endl;
// since this is in my draw loop I'm just resetting again, removing
// doesn't change the impact
glLineWidth(1.0f);
My application is not calling glEnable(GL_LINE_SMOOTH) or anything like that, but it seems the line width never actually changes. I'm confused, because I get the following printout:
line width range: 1, 10
line width before: 1
line width after: 1
It should be that if the line width range is reported as [1, 10], something like glLineWidth(8.0f) would be all I need to actually use it... right?
I don't think this is relevant, but I'm using an OpenGL 3.3 core profile, with an underlying glDrawElements(GL_LINES, count, GL_UNSIGNED_INT, offset) in conjunction with a shader program. I omitted that because it adds a lot of code to the question.
Update: Hmmm. It seems that maybe the driver has disabled it internally or something.
glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, lineWidthRange);
std::cout << "smooth line width range: " << lineWidthRange[0] << ", " << lineWidthRange[1] << std::endl;
On this box it returns [1, 1], so even if I glDisable(GL_LINE_SMOOTH), it seems I still won't be able to use it.
I believe you need to use the compatibility profile for wide lines to work. Widths other than 1.0 are not supported in the core profile; if you are stuck with core, you can render your own thick lines using screen-aligned quads. Hope this helps.
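As a rough illustration of the quad approach (a sketch in window/pixel space with hypothetical names, not a drop-in replacement for the GL 3.3 pipeline above): expand each segment by half the desired width along its screen-space normal and draw the resulting quad as two triangles.
#include <cmath>
struct Vec2 { float x, y; };
// Given a segment (a, b) in screen space and a desired width in pixels,
// compute the four corners of a screen-aligned quad covering the line.
// Corners are ordered for a triangle strip.
void thickLineQuad(Vec2 a, Vec2 b, float width, Vec2 c[4]) {
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) len = 1.0f;         // degenerate segment: avoid divide by zero
    float nx = -dy / len * width * 0.5f; // unit normal scaled by half the width
    float ny =  dx / len * width * 0.5f;
    c[0] = { a.x + nx, a.y + ny };
    c[1] = { a.x - nx, a.y - ny };
    c[2] = { b.x + nx, b.y + ny };
    c[3] = { b.x - nx, b.y - ny };
}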
We're using DJI Assistant 2 as the simulator and a Linux machine as the onboard computer, and we're not getting the correct latitude and longitude out of PositionData.
PositionData p = api->getBroadcastData().pos;
std::cout << "LAT:" << std::fixed << std::setprecision(8) << p.latitude << endl;
std::cout << "LONG:" << std::fixed << std::setprecision(8) << p.longitude << endl;
I've set the simulator to start at lat=1.0 and long=2.0. The position data I get back from the above code is:
LAT:0.01745329
LONG:0.03490660
Height/altitude seem to come out correctly, just the latitude/longitude seem incorrect.
I've tried a range of lat/long settings in the simulator, but the output still doesn't seem to be accurate; the lat/long values always seem to be < 1.
Am I missing something incredibly obvious?
TIA!
The values you see in BroadcastData are in radians; you'll need to convert to degrees to see the values you are setting.
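A minimal sketch of the conversion (the literal values are the ones printed in the question; 0.01745329 rad * 180/pi is 1.0 degree, matching the simulator setting):
#include <iomanip>
#include <iostream>
int main() {
    const double RAD_TO_DEG = 180.0 / 3.14159265358979323846;
    double latRad = 0.01745329, lonRad = 0.03490660; // values from the question, in radians
    std::cout << "LAT:"  << std::fixed << std::setprecision(8) << latRad * RAD_TO_DEG << std::endl;
    std::cout << "LONG:" << std::fixed << std::setprecision(8) << lonRad * RAD_TO_DEG << std::endl;
    return 0;
}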
I am very new to C++ programming and have stumbled across behaviour that confuses me and makes my coding harder. I have searched for an answer and could not find anything; I have also scrolled through the C++ reference pages, and that did not help either (please don't crucify me if the answer is in there somewhere; those pages aren't a role model for explaining things). Maybe I am missing something really obvious.
Could someone explain the following behaviour of std::unordered_map?
std::unordered_map<std::string, std::string> test_map;
test_map["test_key_1"] = "test_value_1";
test_map["test_key_2"] = "test_value_2";
std::cout << "'test_key_1' value: " << test_map["test_key_1"] << std::endl; // This returns "test_value_1"
std::cout << "test_map size before erase: " << test_map.size() << std::endl; // This returns 2
test_map.erase("test_key_1");
std::cout << "test_map size after erase: " << test_map.size() << std::endl; // This returns 1
std::cout << "'test_key_1' value after erase: " << test_map["test_key_1"] << std::endl; // This returns empty string
std::cout << "'non_existing_key' value: " << test_map["non_existing_key"] << std::endl; // This returns empty string
test_map.rehash(test_map.size()); // I am doing this because of vague hints from the internet;
                                  // the code behaves the same way without it.
for (std::unordered_map<std::string, std::string>::iterator it = test_map.begin();
it != test_map.end(); ++it)
{
std::cout << "Key: " << it->first << std::endl;
}
// The above loop prints both 'test_key_1' and 'test_key_2'.
// WHY!?
Why is the iterator returning items that were already erased? How can I make the iterator return only the items that are actually present in the map?
I will be grateful for any help, as I am really lost.
You are using operator[] to access previously erased elements, and operator[]:
Returns a reference to the value that is mapped to a key equivalent to key, performing an insertion if such key does not already exist.
So test_map["test_key_1"] after the erase silently re-inserts the key with a default-constructed (empty) string as its value, which is why the loop sees it again. If you just need to search for a given key, use the find method, which returns map.end() if the element was not found.
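A minimal sketch of the find-based lookup against the map from the question:
#include <iostream>
#include <string>
#include <unordered_map>
int main() {
    std::unordered_map<std::string, std::string> test_map;
    test_map["test_key_2"] = "test_value_2";
    // find() never inserts: a missing key yields end() instead of
    // silently creating a default-constructed empty string.
    auto it = test_map.find("test_key_1");
    if (it != test_map.end())
        std::cout << "'test_key_1' value: " << it->second << std::endl;
    else
        std::cout << "'test_key_1' not found" << std::endl;
    return 0;
}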
I'm trying to write a custom 'driver' for a keyboard (HID, if it matters), under Windows 7. The final goal is having two keyboards connected to the computer, but mapping all of the keys of one of them to special (custom) functions.
My idea is to use libusb-win32 as the 2nd keyboard's driver, and write a small program to read data from the keyboard and act upon it. I've successfully installed the driver, and the device is recognized from my program, but all transfers timeout, even though I'm pressing keys.
Here's my code:
struct usb_bus *busses;
struct usb_device *dev;
char buf[1024];
usb_init();
usb_find_busses();
usb_find_devices();
busses = usb_get_busses();
dev = busses->devices; // first (and only) device on the first bus
cout << dev->descriptor.idVendor << '\n' << dev->descriptor.idProduct << '\n';
usb_dev_handle *h = usb_open(dev);
cout << usb_set_configuration(h, 1) << '\n';
cout << usb_claim_interface(h, 0) << '\n';
cout << usb_interrupt_read(h, 129, buf, 1024, 5000) << '\n'; // endpoint 129 == 0x81: interrupt IN endpoint 1
cout << usb_strerror();
cout << usb_release_interface(h, 0) << '\n';
cout << usb_close(h) << '\n';
and it returns:
1133
49941
0
0
-116
libusb0-dll:err [_usb_reap_async] timeout error
0
0
(I'm pressing lots of keys in those 5 seconds)
There's only one bus, one device, one configuration, one interface and one endpoint.
The endpoint has bmAttributes = 3, which implies I should use interrupt transfers (right?), so why am I not getting anything? Am I misusing libusb? Do you know a way to do this without libusb?
It's pretty simple, actually: when reading from the USB device, you must read exactly the right number of bytes, and you know what that number is by reading wMaxPacketSize. Apparently a read request of any other size simply results in a timeout.
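A sketch of the fix applied to the code from the question (libusb-0.1 API as shipped with libusb-win32; the descriptor indices assume the single configuration/interface/endpoint layout the question describes):
#include <usb.h> // libusb-0.1 header from libusb-win32
#include <iostream>
int main() {
    usb_init();
    usb_find_busses();
    usb_find_devices();
    struct usb_device *dev = usb_get_busses()->devices; // first device on the first bus
    // The endpoint descriptor carries both the address and wMaxPacketSize;
    // indices assume one configuration / interface / altsetting / endpoint.
    struct usb_endpoint_descriptor *ep =
        &dev->config[0].interface[0].altsetting[0].endpoint[0];
    usb_dev_handle *h = usb_open(dev);
    usb_set_configuration(h, 1);
    usb_claim_interface(h, 0);
    char buf[64];
    // Request exactly wMaxPacketSize bytes (typically 8 for a HID keyboard);
    // any other size times out, as noted above.
    int n = usb_interrupt_read(h, ep->bEndpointAddress, buf, ep->wMaxPacketSize, 5000);
    std::cout << "read " << n << " bytes\n";
    usb_release_interface(h, 0);
    usb_close(h);
    return 0;
}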