I am aware that drivers are not required to support variable line widths; the way to query driver support is shown in this answer:
GLfloat lineWidthRange[2] = {0.0f, 0.0f};
glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, lineWidthRange);
std::cout << "line width range: " << lineWidthRange[0] << ", " << lineWidthRange[1] << std::endl;
Querying the current line width:
GLfloat currLineWidth[1] = {0.0f};
glGetFloatv(GL_LINE_WIDTH, currLineWidth);
std::cout << "line width before: " << currLineWidth[0] << std::endl;
So right now, it should and does report 1 for the current line width. I put a sleep in just to make absolutely sure that I wasn't reading back the value before it was actually set, so let's set it and read it back:
glLineWidth(8.0f);
{
// local testing, #include <thread> and <chrono>
std::this_thread::sleep_for(std::chrono::milliseconds(2000));
}
glGetFloatv(GL_LINE_WIDTH, currLineWidth);
std::cout << "line width after: " << currLineWidth[0] << std::endl;
// since this is in my draw loop I'm just resetting it again; removing this
// doesn't change the behavior
glLineWidth(1.0f);
My application does not call glEnable(GL_LINE_SMOOTH) or anything like that, but the line width never actually changes. I'm confused because I get the following printout:
line width range: 1, 10
line width before: 1
line width after: 1
If the line width range reports [1, 10], then something like glLineWidth(8.0f) should be all I need to actually use it... right?
I don't think this is relevant, but I'm using an OpenGL 3.3 core profile, with an underlying glDrawElements(GL_LINES, count, GL_UNSIGNED_INT, offset) in conjunction with a shader program. I omitted that because it adds a lot of code to the question.
Update: Hmmm. It seems that maybe the driver has disabled it internally or something.
glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, lineWidthRange);
std::cout << "smooth line width range: " << lineWidthRange[0] << ", " << lineWidthRange[1] << std::endl;
On this box it returns [1, 1], so even if I glDisable(GL_LINE_SMOOTH), it seems I still won't be able to use it.
I believe you need to use the compatibility profile for line width to work; it is not supported in the core profile. If you stay on core, you can draw your own thick lines by expanding each segment into a screen-aligned quad.
Hope this helps.
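For reference, here is a rough sketch of the quad approach, expanding a 2D segment into two triangles on the CPU (the struct and function names here are illustrative, not from any particular library):
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Expand the segment (a, b) into a quad of the requested pixel width.
// Returns six vertices (two triangles) suitable for GL_TRIANGLES.
// Positions are assumed to already be in screen/pixel space; project them
// to clip space in the vertex shader as usual.
std::vector<Vec2> thickLineQuad(Vec2 a, Vec2 b, float width)
{
    float dx = b.x - a.x;
    float dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len == 0.0f) return {};

    // Unit normal to the segment, scaled to half the line width.
    float nx = -dy / len * (width * 0.5f);
    float ny =  dx / len * (width * 0.5f);

    Vec2 p0{a.x + nx, a.y + ny}, p1{a.x - nx, a.y - ny};
    Vec2 p2{b.x + nx, b.y + ny}, p3{b.x - nx, b.y - ny};

    // Two triangles sharing the edge p1-p2.
    return {p0, p1, p2, p2, p1, p3};
}
The same expansion can also be done in a geometry shader so the existing GL_LINES draw call stays unchanged.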
Related
I am new to C++ and programming.
I am using the CLion editor. I created a simple program for a homework assignment and I can't figure out why my output indents every line after the second line. I have searched online and here, but most indentation questions ask how to indent, not how to stop output from indenting when it was never asked to.
Thanks in advance for any help. I appreciate it.
I tried using code to left-align the output, but that didn't work. I also tried creating a new project and retyping the program, and I still got the same result.
I also tried adding a newline break; that prevents the indent, but then I have a blank line.
I think it may be a setting, but I have no idea which setting to change.
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    int numberPennies, numberNickels, numberDimes, numberQuarters,
        totalCents;
    cout << "Please Enter Number of Coins: " << endl;
    cout << "# of Quarters: ";
    cin >> numberQuarters;
    cout << "# of Dimes: ";
    cin >> numberDimes;
    cout << "# of Nickels: ";
    cin >> numberNickels;
    cout << "# of Pennies: ";
    cin >> numberPennies;
    totalCents = int((numberQuarters * 25) + (numberDimes * 10) +
                     (numberNickels * 5) + (numberPennies));
    cout << "The total is " << int(totalCents / 100) << " dollars and "
         << int(totalCents % 100) << " cents ";
    return 0;
}
The output should be left-aligned; instead it appears like this:
Please Enter Number of Coins:
# of Quarters:13
# of Dimes:4
# of Nickels:11
# of Pennies:17
The total is 4 dollars and 37 cents
Process finished with exit code 0
It seems like you're doing everything right. I ran this code in Visual Studio 2019 and didn't see the issue you're describing, so the indentation is probably a quirk of your IDE's console.
Try running the .exe you're generating instead of using the built-in console in your IDE.
We're using DJI Assistant 2 as the simulator and a Linux machine as the onboard computer, and we're not getting the correct latitude and longitude out of PositionData.
PositionData p = api->getBroadcastData().pos;
std::cout << "LAT:" << std::fixed << std::setprecision(8) << p.latitude << endl;
std::cout << "LONG:" << std::fixed << std::setprecision(8) << p.longitude << endl;
I've set the simulator to start at lat=1.0 and long=2.0. The position data I get back from the above code is:
LAT:0.01745329
LONG:0.03490660
Height/altitude seem to come out correctly, just the latitude/longitude seem incorrect.
I've tried a range of lat/long settings in the simulator, but it still doesn't seem to be accurate. The lat/long always seems to be < 1.
Am I missing something incredibly obvious?
TIA!
The values you see in BroadcastData are in radians; you'll need to convert them to degrees to see the values you set in the simulator. (0.01745329 rad is exactly 1.0 degree and 0.03490660 rad is 2.0 degrees.)
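For example, a minimal sketch of the conversion (only the conversion constant is new; everything else is taken from the question's code):
#include <iostream>
#include <iomanip>

// Sketch only: api and PositionData come from the DJI Onboard SDK, as in the question.
const double kRadToDeg = 180.0 / 3.14159265358979323846;

PositionData p = api->getBroadcastData().pos;
std::cout << "LAT (deg): " << std::fixed << std::setprecision(8)
          << p.latitude * kRadToDeg << std::endl;    // 0.01745329 rad -> ~1.0 deg
std::cout << "LONG (deg): " << std::fixed << std::setprecision(8)
          << p.longitude * kRadToDeg << std::endl;   // 0.03490660 rad -> ~2.0 deg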
I'm trying to open a fullscreen window using SDL2. I've looked thoroughly at the documentation on Display and window management ( https://wiki.libsdl.org/CategoryVideo ), but I don't understand what the best practice is for getting the resolution of the display I am actually working on.
I have the following sample code:
SDL_DisplayMode mMode;
SDL_Rect mRect;
int ret0 = SDL_GetDisplayBounds(0, &mRect);
std::cout << "bounds w and h are: " << mRect.w << " x " << mRect.h << std::endl;
int ret2 = SDL_GetCurrentDisplayMode(0, &mMode);
std::cout << "current display res w and h are: " << mMode.w << " x " << mMode.h << std::endl;
int ret3 = SDL_GetDisplayMode(0, 0, &mMode);
std::cout << "display mode res w and h are: " << mMode.w << " x " << mMode.h << std::endl;
I am working on a single display that has a resolution of 1920x1080. However, the printed results are:
[program output screenshot]
It seems that SDL_GetDisplayMode() is the only function that reports the correct resolution, so I'd be inclined to use that one. However, I've read that the display modes returned by SDL_GetDisplayMode() are sorted by a certain priority, so calling it with index 0 returns the largest supported resolution for the display, which is not necessarily the actual resolution (see also: SDL desktop resolution detection in Linux).
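I also noticed SDL_GetDesktopDisplayMode() in the same docs category, which, if I read the documentation right, reports the desktop's mode rather than the largest supported one. An untested sketch of what I mean:
SDL_DisplayMode desktopMode;
if (SDL_GetDesktopDisplayMode(0, &desktopMode) == 0)
{
    std::cout << "desktop display res w and h are: "
              << desktopMode.w << " x " << desktopMode.h << std::endl;
}
else
{
    std::cout << "SDL_GetDesktopDisplayMode failed: " << SDL_GetError() << std::endl;
}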
My question is: what is the best practice to obtain the correct resolution?
Intro: I am trying to write a program which connects to a FLIR AX5 (GigE Vision) camera and then saves images at regular intervals to a pre-specified location on my PC. These images must be 14-bit, since that is what contains the temperature information. Later I need to process these images using OpenCV to get some meaningful results from the obtained temperature data.
Current position: I can save an image at regular intervals, but the image I am getting contains 8-bit data instead of 14-bit data. This happens even after I change the PixelFormat, CMOS, and LVDT bit depths to 14 bit. I checked the resulting .bin file in MATLAB and found that the maximum pixel value is 255, which means the image is being stored in 8-bit format. I am using the sample code provided by the eBus SDK to do this job, with some changes made as per my requirements.
Please help me save the image in a raw format from which I can read the temperature data.
P.S. The relevant code is below.
// If the buffer contains an image, display its width and height.
uint32_t lWidth = 0, lHeight = 0;
lType = lBuffer->GetPayloadType();
cout << fixed << setprecision( 1 );
cout << lDoodle[ lDoodleIndex ];
cout << " BlockID: " << uppercase << hex << setfill( '0' ) << setw( 16 ) << lBuffer->GetBlockID();
if (lType == PvPayloadTypeImage)
{
    // Get the image-specific buffer interface.
    PvImage *lImage = lBuffer->GetImage();

    // Read width and height.
    lWidth = lImage->GetWidth();
    lHeight = lImage->GetHeight();
    cout << " W: " << dec << lWidth << " H: " << lHeight;

    lBuffer->GetImage()->Alloc(lWidth, lHeight, lBuffer->GetImage()->GetPixelType());

    // Save every 50th frame to a .bin file named after its block ID.
    if (lBuffer->GetBlockID() % 50 == 0) {
        char filename[] = IMAGE_SAVE_LOC;
        std::string s = std::to_string(lBuffer->GetBlockID());
        char const *schar = s.c_str();
        strcat(filename, schar);
        strcat(filename, ".bin");
        lBufferWriter.Store(lBuffer, filename);
    }
Be sure that the stream is configured for 14-bit data.
Before creating the PvStream you have to set PixelFormat to 14 bits. If your PvDevice object is called _pvDevice:
_pvDevice->GetParameters()->SetEnumValue("PixelFormat", PvPixelMono14);
_pvDevice->GetParameters()->SetEnumValue("DigitalOutput", 3);
I'm trying to write a custom 'driver' for a keyboard (HID, if it matters), under Windows 7. The final goal is having two keyboards connected to the computer, but mapping all of the keys of one of them to special (custom) functions.
My idea is to use libusb-win32 as the 2nd keyboard's driver, and write a small program to read data from the keyboard and act upon it. I've successfully installed the driver, and the device is recognized from my program, but all transfers timeout, even though I'm pressing keys.
Here's my code:
struct usb_bus *busses;
struct usb_device *dev;
char buf[1024];
usb_init();
usb_find_busses();
usb_find_devices();
busses = usb_get_busses();
dev = busses->devices;
cout << dev->descriptor.idVendor << '\n' << dev->descriptor.idProduct << '\n';
usb_dev_handle *h = usb_open(dev);
cout << usb_set_configuration(h, 1) << '\n';
cout << usb_claim_interface(h, 0) << '\n';
cout << usb_interrupt_read(h, 129, buf, 1024, 5000) << '\n';
cout << usb_strerror();
cout << usb_release_interface(h, 0) << '\n';
cout << usb_close(h) << '\n';
and it returns:
1133
49941
0
0
-116
libusb0-dll:err [_usb_reap_async] timeout error
0
0
(I'm pressing lots of keys in those 5 seconds)
There's only one bus, one device, one configuration, one interface and one endpoint.
The endpoint has bmAttributes = 3, which implies I should use interrupt transfers (right?),
so why am I not getting anything? Am I misusing libusb? Do you know a way to do this without libusb?
It's pretty simple, actually: when reading from the USB device, you must request exactly the right number of bytes, and you know what that amount is by reading wMaxPacketSize from the endpoint descriptor.
Apparently a read request with any other size simply results in a timeout.
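For example, against the libusb-0.1 style API used in the question (error checking omitted; this assumes wMaxPacketSize fits in the local buffer, which it does for a typical HID keyboard):
// Pull wMaxPacketSize out of the first endpoint descriptor and use it
// as the read size instead of the hard-coded 1024.
struct usb_endpoint_descriptor *ep =
    &dev->config[0].interface[0].altsetting[0].endpoint[0];
int packetSize = ep->wMaxPacketSize;   // typically 8 bytes for a boot keyboard

char buf[64];                          // assumes packetSize <= 64
int n = usb_interrupt_read(h, ep->bEndpointAddress, buf, packetSize, 5000);
cout << "read " << n << " bytes" << '\n';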