Failure to create EGLSurface using the RenderResolutionScale property on Windows

I'm trying to create an EGLSurface in a Windows UWP app. The creation code is in a xaml.cpp file, as shown below.
When I try creating the surface using the optional property EGLRenderResolutionScaleProperty, it fails with an EGL_BAD_ALLOC error. Two alternative approaches work, but I need to use the resolution scale option for my app.
void MyClass::CreateRenderSurface()
{
if (mRenderSurface == EGL_NO_SURFACE)
{
// NOTE: in practice, I only have one of the three following implementations in the code;
// all are included together here for ease of comparison.
// 1. This works
mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, nullptr);
// 2. and this works (here I hardwired the size to twice
// the size of the window I happen to be using, because
// Windows display settings is set at 200%)
Size size;
size.Height = 1448; // hardwired value for testing; in this case the window height is 724 px
size.Width = 1908; // hardwired value for testing; in this case the window width is 954 px
mRenderSurface = CreateSurface(mSwapChainPanel, &size, nullptr);
// 3. but this fails (and this is the one I want to use)
float resolutionScale = 1.0f;
mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
}
}
EGLSurface MyClass::CreateSurface(SwapChainPanel^ panel, const Size* renderSurfaceSize, const float* resolutionScale)
{
if (!panel)
{
throw Exception::CreateException(E_INVALIDARG, L"SwapChainPanel parameter is invalid");
}
if (renderSurfaceSize != nullptr && resolutionScale != nullptr)
{
throw Exception::CreateException(E_INVALIDARG, L"A size and a scale can't both be specified");
}
EGL _egl = this->HelperClass->GetEGL();
EGLSurface surface = EGL_NO_SURFACE;
const EGLint surfaceAttributes[] =
{
EGL_ANGLE_SURFACE_RENDER_TO_BACK_BUFFER, EGL_TRUE,
EGL_NONE
};
// Create a PropertySet and initialize with the EGLNativeWindowType.
PropertySet^ surfaceCreationProperties = ref new PropertySet();
surfaceCreationProperties->Insert(ref new String(EGLNativeWindowTypeProperty), panel);
// If a render surface size is specified, add it to the surface creation properties
if (renderSurfaceSize != nullptr)
{
surfaceCreationProperties->Insert(ref new String(EGLRenderSurfaceSizeProperty), PropertyValue::CreateSize(*renderSurfaceSize));
}
// If a resolution scale is specified, add it to the surface creation properties
if (resolutionScale != nullptr)
{
surfaceCreationProperties->Insert(ref new String(EGLRenderResolutionScaleProperty), PropertyValue::CreateSingle(*resolutionScale));
}
surface = eglCreateWindowSurface(_egl._display, _egl._config, reinterpret_cast<IInspectable*>(surfaceCreationProperties), surfaceAttributes);
EGLint err = eglGetError();
if (surface == EGL_NO_SURFACE)
{
throw Exception::CreateException(E_FAIL, L"Failed to create EGL surface");
}
return surface;
}
where
const wchar_t EGLNativeWindowTypeProperty[] = L"EGLNativeWindowTypeProperty";
const wchar_t EGLRenderSurfaceSizeProperty[] = L"EGLRenderSurfaceSizeProperty";
const wchar_t EGLRenderResolutionScaleProperty[] = L"EGLRenderResolutionScaleProperty";
I have tried changing the cast of the EGLNativeWindowType argument (as in How to create EGLSurface using C++/WinRT and ANGLE?), but that only creates other problems. As indicated, this code does work to create a surface in the basic cases, just not when using the EGLRenderResolutionScaleProperty.
My guess is that something about the way I'm supplying that property is failing, because it fails on what should be reasonable values (e.g., 1.0).

Solved this by first checking that swapChainPanel size is not zero:
void MyClass::CreateRenderSurface()
{
if (mRenderSurface == EGL_NO_SURFACE)
{
if (0 != mSwapChainPanel->ActualHeight && 0 != mSwapChainPanel->ActualWidth)
{
float resolutionScale = 1.0f;
mRenderSurface = CreateSurface(mSwapChainPanel, nullptr, &resolutionScale);
}
}
}
(The code checks elsewhere whether the render surface has been created, and will call this again if needed.)
Interestingly, the original code that used nullptr for both size and resolution arguments (case 1 in original snippet above) didn't need that check.
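For reference, here is a minimal sketch of how the retry can be wired up (the event subscription below is illustrative, not my exact code): subscribing to the panel's SizeChanged event reattempts creation once the panel has a real size.
// Illustrative sketch: retry surface creation once the panel has a non-zero size.
// CreateRenderSurface() is already a no-op once mRenderSurface != EGL_NO_SURFACE.
mSwapChainPanel->SizeChanged += ref new Windows::UI::Xaml::SizeChangedEventHandler(
    [this](Platform::Object^ sender, Windows::UI::Xaml::SizeChangedEventArgs^ args)
    {
        CreateRenderSurface();
    });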

Related

How to display a micro:bit's orientation as a 3D model in Processing?

I am working on a project with my rocketry club.
I want to have a micro:bit control the orientation of the fins to auto-stabilize the rocket.
But first, I tried to make a Processing sketch to display my micro:bit's orientation in real time using its integrated gyroscope.
Here's my Processing code:
import processing.serial.*; // import the serial library
Serial myPort; // create a serial object
float xRot = 0; // variables to store the rotation angles
float yRot = 0;
float zRot = 0;
String occ[];
void setup() {
size(400, 400, P3D); // set the size of the window and enable 3D rendering
String portName = Serial.list()[0]; // get the name of the first serial port
myPort = new Serial(this, portName, 115200); // open a connection to the serial port
println((Object[]) Serial.list());
}
void draw() {
background(255); // clear the screen
translate(width/2, height/2, 0); // center the cube on the screen
rotateX(xRot); // apply the rotations
rotateZ(yRot);
rotateY(zRot);
fill(200); // set the fill color
box(100); // draw the cube
}
void serialEvent(Serial myPort) {
// this function is called whenever new data is available
// read the incoming data from the serial port
String data = myPort.readStringUntil('\n'); // read the data as a string
// print the incoming data to the console if it is not an empty string
if (data != null) {
println(data);
}
delay(10);
if (data != null) {
// split the string into separate values
String[] values = split(data, ',');
// convert the values to floats and store them in the rotation variables
xRot = radians(float(values[0]));
yRot = radians(float(values[1]));
zRot = radians(float(values[2]));
}
}
And here's the Python code I have on my micro:bit:
pitch = 0
roll = 0
x = 0
y = 0
z = 0
def on_forever():
global pitch, roll, x, y, z
pitch = input.rotation(Rotation.PITCH) + 90
roll = input.rotation(Rotation.ROLL) + 90
servos.P2.set_angle(pitch)
servos.P1.set_angle(roll)
x = input.rotation(Rotation.PITCH)
y = input.rotation(Rotation.ROLL)
z = 1
serial.set_baud_rate(BaudRate.BAUD_RATE115200)
serial.write_string(str(x) + "," + str(y) + "," + str(z) + "\n")
basic.forever(on_forever)
What happens when I run my code is that a cube appears and rotates weirdly for a short time; then the cube stops and Processing prints "Error, disabling serialEvent() for COM5 null".
Please help me out, I really need this code to be working!
Is this the documentation for the micro:bit Python API you're using?
input.rotation (as the reference mentions) returns accelerometer data:
a number that means how much the micro:bit is tilted in the direction you ask for. This is a value in degrees between -180 to 180 in either the Rotation.Pitch or the Rotation.Roll direction of rotation.
I'd start with a try/catch block to get more details on the actual error.
For example, is it the actual serial communication (e.g., resetting the baud rate over and over (serial.set_baud_rate(BaudRate.BAUD_RATE115200)) instead of once), or is the code optimistically assuming there will be no errors in serial communication and that the string will always split into at least 3 values?
Unfortunately I won't have the resources to test with an actual device, so the following code might contain errors, but hopefully it illustrates some of the ideas.
I'd try simplifying/minimising the amount of data used on the MicroPython side (and setting the baud rate once), and removing the need to read the accelerometer data twice in the same iteration. If z is always 1 it can probably be ignored (you can always rotate by a constant amount in Processing if necessary):
pitch = 0
roll = 0
x = 0
y = 0
def on_forever():
global pitch, roll, x, y
x = input.rotation(Rotation.PITCH)
y = input.rotation(Rotation.ROLL)
pitch = x + 90
roll = y + 90
servos.P2.set_angle(pitch)
servos.P1.set_angle(roll)
serial.write_string(str(x) + "," + str(y) + "\n")
serial.set_baud_rate(BaudRate.BAUD_RATE115200)
basic.forever(on_forever)
On the processing side, I'd surround serial code with try/catch just in case anything went wrong and double check every step of the string parsing process:
import processing.serial.*; // import the serial library
Serial myPort; // create a serial object
float xRot = 0; // variables to store the rotation angles
float yRot = 0;
void setup() {
size(400, 400, P3D); // set the size of the window and enable 3D rendering
String[] portNames = Serial.list();
printArray(portNames);
String portName = portNames[0]; // get the name of the first serial port
try {
myPort = new Serial(this, portName, 115200); // open a connection to the serial port
myPort.bufferUntil('\n');
} catch (Exception e){
println("error opening serial port " + portName + "\ndouble check USB is connected and the port isn't already open in another app")
e.printStackTrace();
}
}
void draw() {
background(255); // clear the screen
translate(width/2, height/2, 0); // center the cube on the screen
rotateX(xRot); // apply the rotations
rotateZ(yRot);
fill(200); // set the fill color
box(100); // draw the cube
}
void serialEvent(Serial myPort) {
try{
// this function is called whenever new data is available
// read the incoming data from the serial port
String data = myPort.readString(); // read the data as a string
// print the incoming data to the console if it is not an empty string
if (data != null) {
println(data);
// cleanup / remove whitespace
data = data.trim();
// split the string into separate values
String[] values = split(data, ',');
if (values.length >= 2){
// convert the values to floats and store them in the rotation variables
xRot = radians(float(values[0]));
yRot = radians(float(values[1]));
}else{
println("received unexpected number of values");
printArray(values);
}
}
} catch (Exception e){
println("error reading serial:");
e.printStackTrace();
}
}
(If the above still produces serial errors I'd also write a separate test that doesn't use the servos, just in case internally some servo pwm timing/interrupts alongside accelerometer data polling interferes with serial communication for some reason.)
(AFAIK there is no full IMU support on the micro:bit (i.e. accelerometer + gyroscope + magnetometer (+ ideally sensor fusion)), just accelerometer + magnetometer. If you want basic rotation data but don't care about the full 3D orientation of the device, that should suffice. Otherwise you'd need an IMU (e.g. a BNO055 or newer), which you can connect to the micro:bit via I2C (but you will probably also need to implement the communication protocol to the sensor if someone else hasn't done so already). (In theory I see there's Python support for the Nordic nRF52840 chipset; however, the micro:bit uses the nRF51822 for v1 and the nRF52833 for v2 :/.) Depending on your application you might want to switch to a different microcontroller altogether (for example, the official Arduino Nano 33 BLE has a built-in accelerometer (and Python support) (and even supports TensorFlow Lite).)
Since I do not have a micro:bit, I tested your Processing code using an Arduino UNO at a 9600 baud rate. The following serialEvent() function runs without error:
void serialEvent(Serial myPort) {
String data = myPort.readStringUntil('\n');
if (data != null) {
String inStr = trim(data);
String[] values = split(inStr, ',');
printArray(values);
// convert the values to floats and store them in the rotation variables
xRot = radians(float(values[0]));
yRot = radians(float(values[1]));
zRot = radians(float(values[2]));
}
}
Arduino code:
byte val1 = 0;
byte val2 = 0;
byte val3 = 0;
void setup() {
Serial.begin(9600);
}
void loop() {
val1 += 8;
val2 += 4;
val3 += 2;
String s = String(String(val1) + "," + String(val2) + "," + String(val3));
Serial.println(s);
}
Thank you very much George Profenza, it works perfectly well now!
I just had to fix some errors in the code.
Here's the functional code if anyone has this problem later:
import processing.serial.*; // import the serial library
Serial myPort; // create a serial object
float xRot = 0; // variables to store the rotation angles
float yRot = 0;
void setup() {
size(400, 400, P3D); // set the size of the window and enable 3D rendering
String portNames[] = Serial.list();
printArray(portNames);
String portName = portNames[0]; // get the name of the first serial port
try {
myPort = new Serial(this, portName, 115200); // open a connection to the serial port
myPort.bufferUntil('\n');
}
catch (Exception e) {
println("error opening serial port " + portName + "\ndouble check USB is connected and the port isn't already open in another app");
e.printStackTrace();
}
}
void draw() {
background(255); // clear the screen
translate(width/2, height/2, 0); // center the cube on the screen
rotateX(xRot); // apply the rotations
rotateZ(yRot);
fill(200); // set the fill color
box(100); // draw the cube
}
void serialEvent(Serial myPort) {
try {
// this function is called whenever new data is available
// read the incoming data from the serial port
String data = myPort.readString(); // read the data as a string
// print the incoming data to the console if it is not an empty string
if (data != null) {
println(data);
// cleanup / remove whitespace
data = data.trim();
// split the string into separate values
String[] values = split(data, ',');
if (values.length >= 2) {
// convert the values to floats and store them in the rotation variables
xRot = radians(float(values[0]));
yRot = radians(float(values[1]));
} else {
println("received unexpected number of values");
printArray(values);
}
}
}
catch (Exception e) {
println("error reading serial:");
e.printStackTrace();
}
}
As for the micro:bit, I didn't have to change the code at all...
I assume there was an error with the Z axis, since we basically just removed that...

How to simply render ID3D11Texture2D

I am writing a D2D/D3D interoperability program. After getting an ID3D11Texture2D from the outside, it should be rendered in a designated area. I tried CreateDxgiSurfaceRenderTarget, but it has no effect.
The code is below. It runs "normally": there are no errors and the debugging information is displayed, but the window shows a black screen. If I ignore the texture parameter and change the call to only _back_render_target->FillRectangle, that works.
HRESULT CGraphRender::DrawTexture(ID3D11Texture2D* texture, const RECT& dst_rect)
{
float dpi = GetDpiFromD2DFactory(_d2d_factory);
CComPtr<ID3D11Texture2D> temp_texture2d;
CComPtr<ID2D1RenderTarget> temp_render_target;
CComPtr<ID2D1Bitmap> temp_bitmap;
D3D11_TEXTURE2D_DESC desc = { 0 };
texture->GetDesc(&desc);
CD3D11_TEXTURE2D_DESC capture_texture_desc(DXGI_FORMAT_B8G8R8A8_UNORM, desc.Width, desc.Height, 1, 1, D3D11_BIND_RENDER_TARGET);
HRESULT hr = _d3d_device->CreateTexture2D(&capture_texture_desc, nullptr, &temp_texture2d);
RETURN_ON_FAIL(hr);
CComPtr<IDXGISurface> dxgi_capture;
hr = temp_texture2d->QueryInterface(IID_PPV_ARGS(&dxgi_capture));
RETURN_ON_FAIL(hr);
D2D1_RENDER_TARGET_PROPERTIES rt_props = D2D1::RenderTargetProperties(D2D1_RENDER_TARGET_TYPE_DEFAULT, D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi, dpi);
hr = _d2d_factory->CreateDxgiSurfaceRenderTarget(dxgi_capture, rt_props, &temp_render_target);
RETURN_ON_FAIL(hr);
D2D1_BITMAP_PROPERTIES bmp_prop = D2D1::BitmapProperties(D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED), dpi, dpi);
hr = temp_render_target->CreateBitmap(D2D1::SizeU(desc.Width, desc.Height), bmp_prop, &temp_bitmap);
RETURN_ON_FAIL(hr);
CComPtr<ID3D11DeviceContext> immediate_context;
_d3d_device->GetImmediateContext(&immediate_context);
if (!immediate_context)
{
return E_UNEXPECTED;
}
immediate_context->CopyResource(temp_texture2d, texture);
D2D1_POINT_2U src_point = D2D1::Point2U();
D2D1_RECT_U src_rect = D2D1::RectU(0, 0, desc.Width, desc.Height);
hr = temp_bitmap->CopyFromRenderTarget(&src_point, temp_render_target, &src_rect);
RETURN_ON_FAIL(hr);
D2D1_RECT_F d2d1_rect = { (float)dst_rect.left, (float)dst_rect.top, (float)dst_rect.right, (float)dst_rect.bottom};
_back_render_target->DrawBitmap(temp_bitmap, d2d1_rect);
return S_OK;
}
I referred to DirectXTK and had a brief look at its SpriteBatch code. It needs to introduce a bunch of context settings, and I don't yet understand how they relate, or whether the existing code has any influence on them, such as setting the render target view, the viewport, or even shaders.
Is there a relatively simple and effective method, as simple and brute-force as ID2D1RenderTarget's DrawBitmap?

Use a texture array as Direct2D surface render target

I am trying to create a Direct3D 11 texture array holding multiple pages of text rendered using DirectWrite and Direct2D. Suppose layouts holds the IDWriteTextLayouts for the individual pages; then I try to do the following:
{
D3D11_TEXTURE2D_DESC desc;
::ZeroMemory(&desc, sizeof(desc));
desc.ArraySize = static_cast<UINT>(layouts.size());
desc.BindFlags = D3D11_BIND_RENDER_TARGET;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.Height = height;
desc.MipLevels = 1;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.Width = width;
auto hr = this->_d3dDevice->CreateTexture2D(&desc, nullptr, &retval.Texture);
if (FAILED(hr)) {
throw std::system_error(hr, com_category());
}
}
for (auto &l : layouts) {
ATL::CComPtr<IDXGISurface> surface;
{
auto hr = retval.Texture->QueryInterface(&surface);
if (FAILED(hr)) {
// The code fails here with E_NOINTERFACE "No such interface supported."
throw std::system_error(hr, com_category());
}
}
// Go on creating the RT from 'surface'.
}
The problem is that the code fails at the designated line, where I try to obtain the IDXGISurface interface from the ID3D11Texture2D, if there is more than one page (desc.ArraySize > 1). I eventually found in the documentation (https://learn.microsoft.com/en-us/windows/win32/api/dxgi/nn-dxgi-idxgisurface) that this is by design:
If the 2D texture [...] does not consist of an array of textures, QueryInterface succeeds and returns a pointer to the IDXGISurface interface pointer. Otherwise, QueryInterface fails and does not return the pointer to IDXGISurface.
Is there any other way to obtain the individual DXGI surfaces in the texture array to draw to them one after the other using Direct2D?
As I could not find any way to address the sub-surfaces, I now create an intermediate single-slice texture to render to, and copy the result into the texture array using ID3D11DeviceContext::CopySubresourceRegion.
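The copy step looks roughly like this (a sketch; pageTexture and pageIndex are illustrative names, not from the code above):
ATL::CComPtr<ID3D11DeviceContext> context;
this->_d3dDevice->GetImmediateContext(&context);
// Copy mip 0 of the single-slice texture into slice 'pageIndex' of the
// array texture. Both textures have one mip level, hence MipLevels = 1 below.
context->CopySubresourceRegion(
    retval.Texture,                         // destination: the array texture
    D3D11CalcSubresource(0, pageIndex, 1),  // dest subresource: mip 0 of slice pageIndex
    0, 0, 0,                                // destination x, y, z
    pageTexture,                            // source: the single-slice texture
    0,                                      // source subresource: mip 0, slice 0
    nullptr);                               // no source box: copy the whole surface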
How about Texture => IDXGIResource1 => CreateSubresourceSurface?
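That suggestion would look something like this (untested sketch; CreateSubresourceSurface comes from IDXGIResource1 in dxgi1_2.h and hands out one IDXGISurface2 per array slice):
ATL::CComPtr<IDXGIResource1> resource;
auto hr = retval.Texture->QueryInterface(&resource);
if (FAILED(hr)) {
    throw std::system_error(hr, com_category());
}
for (UINT i = 0; i < static_cast<UINT>(layouts.size()); ++i) {
    ATL::CComPtr<IDXGISurface2> surface;
    // Index i addresses the i-th array slice (subresource) of the texture.
    hr = resource->CreateSubresourceSurface(i, &surface);
    if (FAILED(hr)) {
        throw std::system_error(hr, com_category());
    }
    // Go on creating the Direct2D render target from 'surface' as before.
}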

Scale UI for multiple resolutions/different devices

I have a quite simple Unity GUI that has the following scheme:
where Brekt and so on are buttons.
The GUI works just fine on PC, and is in Screen Space - Overlay mode, so it is supposed to adapt automatically to fit every screen.
But on a tablet the whole GUI is smaller and squeezed into the center of the screen, with huge margins around the elements (I can't attach a screenshot right now).
What is the way to fix that? Is it something in the player settings or in the project settings?
Automatically scaling the UI requires using a combination of the anchors and pivot points of RectTransforms and the Canvas Scaler component. This is hard to understand without images or videos, and it is very important that you thoroughly understand how to do it; Unity provides a full video tutorial for this, which you can watch here.
Also, when using scrollbars, scroll views, and other similar UI controls, the ContentSizeFitter component is also used to make sure they fit in the layout.
There is a problem with MovementRange: we must scale this value too.
I did it like this:
public int MovementRange = 100;
public AxisOption axesToUse = AxisOption.Both; // The options for the axes that the stick will use
public string horizontalAxisName = "Horizontal"; // The name given to the horizontal axis for the cross platform input
public string verticalAxisName = "Vertical"; // The name given to the vertical axis for the cross platform input
private int _MovementRange = 100;
Vector3 m_StartPos;
bool m_UseX; // Toggle for using the x axis
bool m_UseY; // Toggle for using the Y axis
CrossPlatformInputManager.VirtualAxis m_HorizontalVirtualAxis; // Reference to the joystick in the cross platform input
CrossPlatformInputManager.VirtualAxis m_VerticalVirtualAxis; // Reference to the joystick in the cross platform input
void OnEnable()
{
CreateVirtualAxes();
}
void Start()
{
m_StartPos = transform.position;
Canvas c = GetComponentInParent<Canvas>();
_MovementRange = (int)(MovementRange * c.scaleFactor);
Debug.Log("Range:"+ _MovementRange);
}
void UpdateVirtualAxes(Vector3 value)
{
var delta = m_StartPos - value;
delta.y = -delta.y;
delta /= _MovementRange;
if (m_UseX)
{
m_HorizontalVirtualAxis.Update(-delta.x);
}
if (m_UseY)
{
m_VerticalVirtualAxis.Update(delta.y);
}
}
void CreateVirtualAxes()
{
// set axes to use
m_UseX = (axesToUse == AxisOption.Both || axesToUse == AxisOption.OnlyHorizontal);
m_UseY = (axesToUse == AxisOption.Both || axesToUse == AxisOption.OnlyVertical);
// create new axes based on axes to use
if (m_UseX)
{
m_HorizontalVirtualAxis = new CrossPlatformInputManager.VirtualAxis(horizontalAxisName);
CrossPlatformInputManager.RegisterVirtualAxis(m_HorizontalVirtualAxis);
}
if (m_UseY)
{
m_VerticalVirtualAxis = new CrossPlatformInputManager.VirtualAxis(verticalAxisName);
CrossPlatformInputManager.RegisterVirtualAxis(m_VerticalVirtualAxis);
}
}
public void OnDrag(PointerEventData data)
{
Vector3 newPos = Vector3.zero;
if (m_UseX)
{
int delta = (int)(data.position.x - m_StartPos.x);
delta = Mathf.Clamp(delta, -_MovementRange, _MovementRange);
newPos.x = delta;
}
if (m_UseY)
{
int delta = (int)(data.position.y - m_StartPos.y);
delta = Mathf.Clamp(delta, -_MovementRange, _MovementRange);
newPos.y = delta;
}
transform.position = new Vector3(m_StartPos.x + newPos.x, m_StartPos.y + newPos.y, m_StartPos.z + newPos.z);
UpdateVirtualAxes(transform.position);
}

TTF text won't show up in SDL

No matter what I try, I can't get my text to load into a texture in SDL 2.0 using SDL_ttf.
Here is my textToTexture code:
void sdlapp::textToTexture(string text, SDL_Color textColor,SDL_Texture* textTexture)
{
//free previous texture in textTexture if the texture exists
if (textTexture != nullptr)
{
SDL_DestroyTexture(textTexture);
}
SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
textTexture = SDL_CreateTextureFromSurface(m_renderer, textSurface);
//free surface
SDL_FreeSurface(textSurface);
}
And here is where I load the font and the text:
bool sdlapp::loadMedia()
{
bool success = true;
//load media here
//load font
m_font = TTF_OpenFont("Fonts/MotorwerkOblique.ttf", 28);
//load text
SDL_Color textColor = { 0x255, 0x255, 0x235 };
textToTexture("im a texture thing", textColor, m_font_texture);
return success;
}
And this is the code I am using to render it:
void sdlapp::render()
{
//clear the screen
SDL_RenderClear(m_renderer);
//do render stuff here
SDL_Rect rect= { 32, 64, 128, 32 };
SDL_RenderCopy(m_renderer, m_font_texture, NULL, NULL);
//update the screen to the current render
SDL_RenderPresent(m_renderer);
}
Does anyone know what I am doing wrong?
Thanks in advance, JustinWeq.
textToTexture renders the text with SDL_ttf; the resulting SDL_Texture address is then assigned to a variable called textTexture. The problem is, textTexture is a local variable pointing to the same address as m_font_texture. They're not the same variable; they're different variables pointing to the same place, so you're not changing any of the caller's variables.
For clarification on pointers, I'd recommend seeing question 4.8 of the C-FAQ
I'd make textToTexture return the new texture's address, and not bother freeing resources that are not managed by it (m_font_texture belongs to sdlapp, so it should be managed by sdlapp).
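A sketch of what I mean, keeping your names (untested):
SDL_Texture* sdlapp::textToTexture(const string& text, SDL_Color textColor)
{
    SDL_Surface* textSurface = TTF_RenderText_Solid(m_font, text.c_str(), textColor);
    if (textSurface == nullptr)
    {
        return nullptr; // rendering failed (e.g. the font didn't load)
    }
    SDL_Texture* texture = SDL_CreateTextureFromSurface(m_renderer, textSurface);
    SDL_FreeSurface(textSurface); // the surface is no longer needed
    return texture;
}
Then in loadMedia the caller keeps ownership of the result:
m_font_texture = textToTexture("im a texture thing", textColor);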
